00:00:00.001 Started by upstream project "autotest-per-patch" build number 126204
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.010 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.012 The recommended git tool is: git
00:00:00.012 using credential 00000000-0000-0000-0000-000000000002
00:00:00.015 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.031 Fetching changes from the remote Git repository
00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.054 Using shallow fetch with depth 1
00:00:00.054 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.054 > git --version # timeout=10
00:00:00.072 > git --version # 'git version 2.39.2'
00:00:00.072 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.094 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.094 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.230 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.247 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.259 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:02.259 > git config core.sparsecheckout # timeout=10
00:00:02.270 > git read-tree -mu HEAD # timeout=10
00:00:02.288 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:02.307 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:02.308 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:02.443 [Pipeline] Start of Pipeline
00:00:02.458 [Pipeline] library
00:00:02.460 Loading library shm_lib@master
00:00:02.460 Library shm_lib@master is cached. Copying from home.
00:00:02.485 [Pipeline] node
00:00:02.502 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:02.504 [Pipeline] {
00:00:02.516 [Pipeline] catchError
00:00:02.517 [Pipeline] {
00:00:02.532 [Pipeline] wrap
00:00:02.541 [Pipeline] {
00:00:02.547 [Pipeline] stage
00:00:02.549 [Pipeline] { (Prologue)
00:00:02.728 [Pipeline] sh
00:00:03.012 + logger -p user.info -t JENKINS-CI
00:00:03.029 [Pipeline] echo
00:00:03.032 Node: WFP20
00:00:03.040 [Pipeline] sh
00:00:03.335 [Pipeline] setCustomBuildProperty
00:00:03.350 [Pipeline] echo
00:00:03.351 Cleanup processes
00:00:03.357 [Pipeline] sh
00:00:03.659 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.659 1890506 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.672 [Pipeline] sh
00:00:03.949 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.949 ++ grep -v 'sudo pgrep'
00:00:03.949 ++ awk '{print $1}'
00:00:03.949 + sudo kill -9
00:00:03.949 + true
00:00:03.963 [Pipeline] cleanWs
00:00:03.971 [WS-CLEANUP] Deleting project workspace...
00:00:03.971 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.977 [WS-CLEANUP] done
00:00:03.982 [Pipeline] setCustomBuildProperty
00:00:03.998 [Pipeline] sh
00:00:04.280 + sudo git config --global --replace-all safe.directory '*'
00:00:04.360 [Pipeline] httpRequest
00:00:04.380 [Pipeline] echo
00:00:04.381 Sorcerer 10.211.164.101 is alive
00:00:04.390 [Pipeline] httpRequest
00:00:04.393 HttpMethod: GET
00:00:04.394 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.394 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.396 Response Code: HTTP/1.1 200 OK
00:00:04.397 Success: Status code 200 is in the accepted range: 200,404
00:00:04.397 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.871 [Pipeline] sh
00:00:05.153 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.168 [Pipeline] httpRequest
00:00:05.188 [Pipeline] echo
00:00:05.189 Sorcerer 10.211.164.101 is alive
00:00:05.197 [Pipeline] httpRequest
00:00:05.201 HttpMethod: GET
00:00:05.201 URL: http://10.211.164.101/packages/spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz
00:00:05.202 Sending request to url: http://10.211.164.101/packages/spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz
00:00:05.212 Response Code: HTTP/1.1 200 OK
00:00:05.212 Success: Status code 200 is in the accepted range: 200,404
00:00:05.213 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz
00:00:24.874 [Pipeline] sh
00:00:25.156 + tar --no-same-owner -xf spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz
00:00:27.701 [Pipeline] sh
00:00:27.978 + git -C spdk log --oneline -n5
00:00:27.978 72fc6988f nvmf: add nvmf_update_mdns_prr
00:00:27.978 97f71d59d nvmf: consolidate listener addition in avahi_entry_group_add_listeners
00:00:27.978 719d03c6a sock/uring: only register net impl if supported
00:00:27.978 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:27.978 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:27.987 [Pipeline] }
00:00:27.999 [Pipeline] // stage
00:00:28.010 [Pipeline] stage
00:00:28.013 [Pipeline] { (Prepare)
00:00:28.035 [Pipeline] writeFile
00:00:28.054 [Pipeline] sh
00:00:28.337 + logger -p user.info -t JENKINS-CI
00:00:28.351 [Pipeline] sh
00:00:28.632 + logger -p user.info -t JENKINS-CI
00:00:28.643 [Pipeline] sh
00:00:28.921 + cat autorun-spdk.conf
00:00:28.921 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.921 SPDK_TEST_FUZZER_SHORT=1
00:00:28.921 SPDK_TEST_FUZZER=1
00:00:28.921 SPDK_RUN_UBSAN=1
00:00:28.928 RUN_NIGHTLY=0
00:00:28.933 [Pipeline] readFile
00:00:28.953 [Pipeline] withEnv
00:00:28.955 [Pipeline] {
00:00:28.966 [Pipeline] sh
00:00:29.243 + set -ex
00:00:29.243 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:29.243 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:29.243 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.243 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:29.243 ++ SPDK_TEST_FUZZER=1
00:00:29.243 ++ SPDK_RUN_UBSAN=1
00:00:29.243 ++ RUN_NIGHTLY=0
00:00:29.243 + case $SPDK_TEST_NVMF_NICS in
00:00:29.243 + DRIVERS=
00:00:29.243 + [[ -n '' ]]
00:00:29.243 + exit 0
00:00:29.253 [Pipeline] }
00:00:29.272 [Pipeline] // withEnv
00:00:29.277 [Pipeline] }
00:00:29.294 [Pipeline] // stage
00:00:29.302 [Pipeline] catchError
00:00:29.303 [Pipeline] {
00:00:29.315 [Pipeline] timeout
00:00:29.315 Timeout set to expire in 30 min
00:00:29.316 [Pipeline] {
00:00:29.329 [Pipeline] stage
00:00:29.330 [Pipeline] { (Tests)
00:00:29.344 [Pipeline] sh
00:00:29.621 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.621 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.621 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.621 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:29.621 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:29.621 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:29.621 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:29.621 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:29.621 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:29.621 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:29.621 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:29.621 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.621 + source /etc/os-release
00:00:29.621 ++ NAME='Fedora Linux'
00:00:29.621 ++ VERSION='38 (Cloud Edition)'
00:00:29.621 ++ ID=fedora
00:00:29.621 ++ VERSION_ID=38
00:00:29.621 ++ VERSION_CODENAME=
00:00:29.621 ++ PLATFORM_ID=platform:f38
00:00:29.621 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:29.621 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:29.621 ++ LOGO=fedora-logo-icon
00:00:29.621 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:29.621 ++ HOME_URL=https://fedoraproject.org/
00:00:29.621 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:29.621 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:29.621 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:29.621 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:29.621 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:29.621 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:29.621 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:29.621 ++ SUPPORT_END=2024-05-14
00:00:29.621 ++ VARIANT='Cloud Edition'
00:00:29.621 ++ VARIANT_ID=cloud
00:00:29.621 + uname -a
00:00:29.621 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:29.621 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:32.921 Hugepages
00:00:32.921 node hugesize free / total
00:00:32.921 node0 1048576kB 0 / 0
00:00:32.921 node0 2048kB 0 / 0
00:00:32.921 node1 1048576kB 0 / 0
00:00:32.921 node1 2048kB 0 / 0
00:00:32.921
00:00:32.921 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:32.921 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:32.921 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:32.921 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:32.921 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:32.921 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:32.921 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:32.921 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:32.921 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:32.921 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:32.921 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:32.921 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:32.921 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:32.921 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:32.921 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:32.921 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:32.921 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:32.921 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:32.921 + rm -f /tmp/spdk-ld-path
00:00:32.921 + source autorun-spdk.conf
00:00:32.921 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.921 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:32.921 ++ SPDK_TEST_FUZZER=1
00:00:32.921 ++ SPDK_RUN_UBSAN=1
00:00:32.921 ++ RUN_NIGHTLY=0
00:00:32.921 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:32.921 + [[ -n '' ]]
00:00:32.921 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:32.921 + for M in /var/spdk/build-*-manifest.txt
00:00:32.921 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:32.921 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:32.921 + for M in /var/spdk/build-*-manifest.txt
00:00:32.921 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:32.921 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:32.921 ++ uname
00:00:32.921 + [[ Linux == \L\i\n\u\x ]]
00:00:32.921 + sudo dmesg -T
00:00:32.921 + sudo dmesg --clear
00:00:32.921 + dmesg_pid=1891400
00:00:32.921 + [[ Fedora Linux == FreeBSD ]]
00:00:32.921 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.921 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.921 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:32.921 + [[ -x /usr/src/fio-static/fio ]]
00:00:32.921 + export FIO_BIN=/usr/src/fio-static/fio
00:00:32.921 + FIO_BIN=/usr/src/fio-static/fio
00:00:32.921 + sudo dmesg -Tw
00:00:32.921 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:32.921 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:32.921 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:32.921 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:32.921 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:32.921 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:32.921 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:32.921 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:32.921 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:32.921 Test configuration:
00:00:32.921 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.921 SPDK_TEST_FUZZER_SHORT=1
00:00:32.921 SPDK_TEST_FUZZER=1
00:00:32.921 SPDK_RUN_UBSAN=1
00:00:32.921 RUN_NIGHTLY=0 16:17:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:00:32.921 16:17:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:32.921 16:17:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:32.921 16:17:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:32.921 16:17:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:32.921 16:17:12 -- paths/export.sh@3 -- $
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.921 16:17:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.921 16:17:12 -- paths/export.sh@5 -- $ export PATH 00:00:32.921 16:17:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.921 16:17:12 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:32.921 16:17:12 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:32.921 16:17:12 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721053032.XXXXXX 00:00:32.921 16:17:12 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721053032.J075tQ 00:00:32.921 16:17:12 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:32.921 16:17:12 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:32.921 16:17:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:32.922 16:17:12 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:32.922 16:17:12 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:32.922 16:17:12 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:32.922 16:17:12 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:32.922 16:17:12 -- common/autotest_common.sh@10 -- $ set +x 00:00:32.922 16:17:12 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:32.922 16:17:12 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:32.922 16:17:12 -- pm/common@17 -- $ local monitor 00:00:32.922 16:17:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:32.922 16:17:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:32.922 16:17:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:32.922 16:17:12 -- pm/common@21 -- $ date +%s 00:00:32.922 16:17:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:32.922 16:17:12 -- pm/common@21 -- $ date +%s 
00:00:32.922 16:17:12 -- pm/common@25 -- $ sleep 1
00:00:32.922 16:17:12 -- pm/common@21 -- $ date +%s
00:00:32.922 16:17:12 -- pm/common@21 -- $ date +%s
00:00:32.922 16:17:12 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721053032
00:00:32.922 16:17:12 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721053032
00:00:32.922 16:17:12 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721053032
00:00:32.922 16:17:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721053032
00:00:32.922 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721053032_collect-cpu-temp.pm.log
00:00:32.922 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721053032_collect-vmstat.pm.log
00:00:32.922 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721053032_collect-cpu-load.pm.log
00:00:32.922 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721053032_collect-bmc-pm.bmc.pm.log
00:00:33.913 16:17:13 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:33.913 16:17:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:33.913 16:17:13 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:33.913 16:17:13 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:33.913 16:17:13 -- spdk/autobuild.sh@16 -- $ date -u
00:00:33.913 Mon Jul 15 02:17:13 PM UTC 2024
00:00:33.913 16:17:13 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:33.913 v24.09-pre-204-g72fc6988f
00:00:33.913 16:17:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:33.913 16:17:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:33.913 16:17:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:33.913 16:17:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:33.913 16:17:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:33.913 16:17:13 -- common/autotest_common.sh@10 -- $ set +x
00:00:33.913 ************************************
00:00:33.913 START TEST ubsan
00:00:33.913 ************************************
00:00:33.913 16:17:13 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:33.913 using ubsan
00:00:33.913
00:00:33.913 real 0m0.001s
00:00:33.913 user 0m0.000s
00:00:33.913 sys 0m0.000s
00:00:33.913 16:17:13 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:33.913 16:17:13 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:33.913 ************************************
00:00:33.913 END TEST ubsan
00:00:33.913 ************************************
00:00:33.913 16:17:13 -- common/autotest_common.sh@1142 -- $ return 0
00:00:33.913 16:17:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:33.913 16:17:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:33.913 16:17:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:33.913 16:17:13 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
00:00:33.913 16:17:13 -- spdk/autobuild.sh@52 -- $ llvm_precompile
00:00:33.913 16:17:13 -- common/autobuild_common.sh@432 -- $ run_test autobuild_llvm_precompile _llvm_precompile
00:00:33.913 16:17:13 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:00:33.913 16:17:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:33.913 16:17:13 -- common/autotest_common.sh@10 -- $ set +x
00:00:33.913 ************************************
00:00:33.913 START TEST autobuild_llvm_precompile
00:00:33.913 ************************************
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38)
00:00:33.913 Target: x86_64-redhat-linux-gnu
00:00:33.913 Thread model: posix
00:00:33.913 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]]
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a'
00:00:33.913 16:17:13 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:00:34.172 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:00:34.172 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:00:34.739 Using 'verbs' RDMA provider
00:00:50.564 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:02.766 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:02.766 Creating mk/config.mk...done.
00:01:02.766 Creating mk/cc.flags.mk...done.
00:01:02.766 Type 'make' to build.
00:01:02.766
00:01:02.766 real 0m28.329s
00:01:02.766 user 0m12.318s
00:01:02.766 sys 0m15.157s
00:01:02.766 16:17:41 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:02.766 16:17:41 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:01:02.766 ************************************
00:01:02.766 END TEST autobuild_llvm_precompile
00:01:02.766 ************************************
00:01:02.766 16:17:41 -- common/autotest_common.sh@1142 -- $ return 0
00:01:02.766 16:17:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:02.766 16:17:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:02.766 16:17:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:02.766 16:17:41 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
00:01:02.766 16:17:41 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:01:02.766 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:02.766 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:03.025 Using 'verbs' RDMA provider
00:01:16.157 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:28.366 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:28.366 Creating mk/config.mk...done.
00:01:28.366 Creating mk/cc.flags.mk...done.
00:01:28.366 Type 'make' to build.
00:01:28.366 16:18:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:28.366 16:18:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:28.366 16:18:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:28.366 16:18:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.366 ************************************
00:01:28.366 START TEST make
00:01:28.366 ************************************
00:01:28.366 16:18:06 make -- common/autotest_common.sh@1123 -- $ make -j112
00:01:28.366 make[1]: Nothing to be done for 'all'.
00:01:29.304 The Meson build system
00:01:29.304 Version: 1.3.1
00:01:29.304 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:01:29.304 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:29.304 Build type: native build
00:01:29.304 Project name: libvfio-user
00:01:29.304 Project version: 0.0.1
00:01:29.304 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:29.304 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:29.304 Host machine cpu family: x86_64
00:01:29.304 Host machine cpu: x86_64
00:01:29.304 Run-time dependency threads found: YES
00:01:29.304 Library dl found: YES
00:01:29.304 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:29.304 Run-time dependency json-c found: YES 0.17
00:01:29.304 Run-time dependency cmocka found: YES 1.1.7
00:01:29.304 Program pytest-3 found: NO
00:01:29.304 Program flake8 found: NO
00:01:29.304 Program misspell-fixer found: NO
00:01:29.304 Program restructuredtext-lint found: NO
00:01:29.304 Program valgrind found: YES (/usr/bin/valgrind)
00:01:29.304 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:29.304 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:29.304 Compiler for C supports arguments -Wwrite-strings: YES
00:01:29.304 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:29.304 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:29.304 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:29.304 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:29.304 Build targets in project: 8 00:01:29.304 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:29.304 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:29.304 00:01:29.305 libvfio-user 0.0.1 00:01:29.305 00:01:29.305 User defined options 00:01:29.305 buildtype : debug 00:01:29.305 default_library: static 00:01:29.305 libdir : /usr/local/lib 00:01:29.305 00:01:29.305 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:29.562 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:29.562 [1/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:29.562 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:29.562 [3/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:29.562 [4/36] Compiling C object samples/null.p/null.c.o 00:01:29.562 [5/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:29.562 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:29.562 [7/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:29.562 [8/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:29.562 [9/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:29.562 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:29.562 [11/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:29.562 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:29.562 [13/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:29.562 [14/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:29.562 [15/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:29.562 [16/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:29.562 [17/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:29.562 [18/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:29.562 [19/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:29.562 [20/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:29.562 [21/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:29.562 [22/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:29.562 [23/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:29.562 [24/36] Compiling C object samples/server.p/server.c.o 00:01:29.562 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:29.562 [26/36] Compiling C object samples/client.p/client.c.o 00:01:29.562 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:29.563 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:29.563 [29/36] Linking target samples/client 00:01:29.563 [30/36] Linking static target lib/libvfio-user.a 00:01:29.821 [31/36] Linking target test/unit_tests 00:01:29.821 [32/36] Linking target samples/shadow_ioeventfd_server 00:01:29.821 [33/36] Linking target samples/lspci 00:01:29.821 [34/36] Linking target samples/gpio-pci-idio-16 00:01:29.821 [35/36] Linking target samples/null 00:01:29.821 [36/36] Linking target samples/server 00:01:29.821 INFO: autodetecting backend as ninja 00:01:29.821 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:29.821 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:30.080 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:30.080 ninja: no work to do. 00:01:35.351 The Meson build system 00:01:35.351 Version: 1.3.1 00:01:35.351 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:35.351 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:35.351 Build type: native build 00:01:35.351 Program cat found: YES (/usr/bin/cat) 00:01:35.351 Project name: DPDK 00:01:35.351 Project version: 24.03.0 00:01:35.351 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:35.351 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:35.351 Host machine cpu family: x86_64 00:01:35.351 Host machine cpu: x86_64 00:01:35.351 Message: ## Building in Developer Mode ## 00:01:35.351 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:35.351 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:35.351 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:35.351 Program python3 found: YES (/usr/bin/python3) 00:01:35.351 Program cat found: YES (/usr/bin/cat) 00:01:35.351 Compiler for C supports arguments -march=native: YES 00:01:35.351 Checking for size of "void *" : 8 00:01:35.351 Checking for size of "void *" : 8 (cached) 00:01:35.351 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:35.351 Library m found: YES 00:01:35.351 Library numa found: YES 00:01:35.351 Has header "numaif.h" : YES 00:01:35.351 Library fdt found: NO 00:01:35.351 Library execinfo found: NO 00:01:35.351 Has header "execinfo.h" : YES 00:01:35.351 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:35.351 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:35.351 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:35.351 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:35.351 Run-time dependency openssl found: YES 3.0.9 00:01:35.351 Run-time dependency libpcap found: YES 1.10.4 00:01:35.351 Has header "pcap.h" with dependency libpcap: YES 00:01:35.351 Compiler for C supports arguments -Wcast-qual: YES 00:01:35.351 Compiler for C supports arguments -Wdeprecated: YES 00:01:35.351 Compiler for C supports arguments -Wformat: YES 00:01:35.351 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:35.351 Compiler for C supports arguments -Wformat-security: YES 00:01:35.351 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.351 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:35.351 Compiler for C supports arguments -Wnested-externs: YES 00:01:35.351 Compiler for C supports arguments -Wold-style-definition: YES 00:01:35.351 Compiler for C supports arguments -Wpointer-arith: YES 00:01:35.351 Compiler for C supports arguments -Wsign-compare: YES 00:01:35.351 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:35.351 Compiler for C supports arguments -Wundef: YES 00:01:35.351 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.351 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:35.351 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:35.351 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:35.351 Program objdump found: YES (/usr/bin/objdump) 00:01:35.351 Compiler for C supports arguments -mavx512f: YES 00:01:35.351 Checking if "AVX512 checking" compiles: YES 00:01:35.351 Fetching value of define "__SSE4_2__" : 1 00:01:35.351 Fetching value of define "__AES__" : 1 00:01:35.351 Fetching value of define "__AVX__" : 1 00:01:35.351 Fetching value of define "__AVX2__" : 1 00:01:35.351 Fetching value of define "__AVX512BW__" : 1 00:01:35.351 Fetching value of define "__AVX512CD__" : 1 00:01:35.351 Fetching value of define "__AVX512DQ__" : 1 00:01:35.351 Fetching value of define "__AVX512F__" : 1 00:01:35.351 Fetching value of define "__AVX512VL__" : 1 00:01:35.351 Fetching value of define "__PCLMUL__" : 1 00:01:35.351 Fetching value of define "__RDRND__" : 1 00:01:35.351 Fetching value of define "__RDSEED__" : 1 00:01:35.351 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:35.351 Fetching value of define "__znver1__" : (undefined) 00:01:35.351 Fetching value of define "__znver2__" : (undefined) 00:01:35.351 Fetching value of define "__znver3__" : (undefined) 00:01:35.351 Fetching value of define "__znver4__" : (undefined) 00:01:35.351 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:35.351 Message: lib/log: Defining dependency "log" 00:01:35.351 Message: lib/kvargs: Defining dependency "kvargs" 00:01:35.351 Message: lib/telemetry: Defining dependency "telemetry" 00:01:35.351 Checking for function "getentropy" : NO 00:01:35.351 Message: lib/eal: Defining dependency "eal" 00:01:35.351 Message: lib/ring: Defining dependency "ring" 00:01:35.351 Message: lib/rcu: Defining dependency "rcu" 00:01:35.351 Message: lib/mempool: Defining dependency "mempool" 00:01:35.351 Message: lib/mbuf: Defining dependency "mbuf" 00:01:35.351 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:35.351 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:35.351 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:35.351 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:35.351 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:35.351 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:35.351 Compiler for C supports arguments -mpclmul: YES 00:01:35.351 Compiler for C supports arguments -maes: YES 00:01:35.351 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.351 Compiler for C supports arguments -mavx512bw: YES 00:01:35.351 Compiler for C supports arguments -mavx512dq: YES 00:01:35.351 Compiler for C supports arguments -mavx512vl: YES 00:01:35.351 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:35.351 Compiler for C supports arguments -mavx2: YES 00:01:35.351 Compiler for C supports arguments -mavx: YES 00:01:35.351 Message: lib/net: Defining dependency "net" 00:01:35.351 Message: lib/meter: Defining dependency "meter" 00:01:35.351 Message: lib/ethdev: Defining dependency "ethdev" 00:01:35.351 Message: lib/pci: Defining dependency "pci" 00:01:35.351 Message: lib/cmdline: Defining dependency "cmdline" 00:01:35.351 Message: lib/hash: Defining dependency "hash" 00:01:35.351 Message: lib/timer: Defining dependency "timer" 00:01:35.351 Message: lib/compressdev: Defining dependency "compressdev" 00:01:35.351 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:35.351 Message: lib/dmadev: Defining dependency "dmadev" 00:01:35.351 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:35.351 Message: lib/power: Defining dependency "power" 00:01:35.351 Message: lib/reorder: Defining 
dependency "reorder" 00:01:35.351 Message: lib/security: Defining dependency "security" 00:01:35.351 Has header "linux/userfaultfd.h" : YES 00:01:35.351 Has header "linux/vduse.h" : YES 00:01:35.351 Message: lib/vhost: Defining dependency "vhost" 00:01:35.351 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:35.351 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:35.351 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:35.351 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:35.351 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:35.351 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:35.351 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:35.351 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:35.351 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:35.351 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:35.351 Program doxygen found: YES (/usr/bin/doxygen) 00:01:35.351 Configuring doxy-api-html.conf using configuration 00:01:35.351 Configuring doxy-api-man.conf using configuration 00:01:35.351 Program mandb found: YES (/usr/bin/mandb) 00:01:35.351 Program sphinx-build found: NO 00:01:35.351 Configuring rte_build_config.h using configuration 00:01:35.351 Message: 00:01:35.351 ================= 00:01:35.351 Applications Enabled 00:01:35.351 ================= 00:01:35.351 00:01:35.351 apps: 00:01:35.351 00:01:35.351 00:01:35.351 Message: 00:01:35.351 ================= 00:01:35.351 Libraries Enabled 00:01:35.351 ================= 00:01:35.351 00:01:35.351 libs: 00:01:35.351 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:35.351 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:35.351 cryptodev, dmadev, power, reorder, security, vhost, 00:01:35.351 00:01:35.351 Message: 00:01:35.351 =============== 00:01:35.351 Drivers Enabled 00:01:35.351 =============== 00:01:35.351 00:01:35.351 common: 00:01:35.351 00:01:35.351 bus: 00:01:35.351 pci, vdev, 00:01:35.351 mempool: 00:01:35.351 ring, 00:01:35.351 dma: 00:01:35.351 00:01:35.351 net: 00:01:35.351 00:01:35.351 crypto: 00:01:35.351 00:01:35.351 compress: 00:01:35.351 00:01:35.351 vdpa: 00:01:35.351 00:01:35.351 00:01:35.351 Message: 00:01:35.351 ================= 00:01:35.351 Content Skipped 00:01:35.351 ================= 00:01:35.351 00:01:35.351 apps: 00:01:35.351 dumpcap: explicitly disabled via build config 00:01:35.351 graph: explicitly disabled via build config 00:01:35.351 pdump: explicitly disabled via build config 00:01:35.351 proc-info: explicitly disabled via build config 00:01:35.351 test-acl: explicitly disabled via build config 00:01:35.351 test-bbdev: explicitly disabled via build config 00:01:35.351 test-cmdline: explicitly disabled via build config 00:01:35.351 test-compress-perf: explicitly disabled via build config 00:01:35.351 test-crypto-perf: explicitly disabled via build config 00:01:35.352 test-dma-perf: explicitly disabled via build config 00:01:35.352 test-eventdev: explicitly disabled via build config 00:01:35.352 test-fib: explicitly disabled via build config 00:01:35.352 test-flow-perf: explicitly disabled via build config 00:01:35.352 test-gpudev: explicitly disabled via build config 00:01:35.352 test-mldev: explicitly disabled via build config 00:01:35.352 test-pipeline: explicitly disabled via build config 00:01:35.352 test-pmd: explicitly 
disabled via build config 00:01:35.352 test-regex: explicitly disabled via build config 00:01:35.352 test-sad: explicitly disabled via build config 00:01:35.352 test-security-perf: explicitly disabled via build config 00:01:35.352 00:01:35.352 libs: 00:01:35.352 argparse: explicitly disabled via build config 00:01:35.352 metrics: explicitly disabled via build config 00:01:35.352 acl: explicitly disabled via build config 00:01:35.352 bbdev: explicitly disabled via build config 00:01:35.352 bitratestats: explicitly disabled via build config 00:01:35.352 bpf: explicitly disabled via build config 00:01:35.352 cfgfile: explicitly disabled via build config 00:01:35.352 distributor: explicitly disabled via build config 00:01:35.352 efd: explicitly disabled via build config 00:01:35.352 eventdev: explicitly disabled via build config 00:01:35.352 dispatcher: explicitly disabled via build config 00:01:35.352 gpudev: explicitly disabled via build config 00:01:35.352 gro: explicitly disabled via build config 00:01:35.352 gso: explicitly disabled via build config 00:01:35.352 ip_frag: explicitly disabled via build config 00:01:35.352 jobstats: explicitly disabled via build config 00:01:35.352 latencystats: explicitly disabled via build config 00:01:35.352 lpm: explicitly disabled via build config 00:01:35.352 member: explicitly disabled via build config 00:01:35.352 pcapng: explicitly disabled via build config 00:01:35.352 rawdev: explicitly disabled via build config 00:01:35.352 regexdev: explicitly disabled via build config 00:01:35.352 mldev: explicitly disabled via build config 00:01:35.352 rib: explicitly disabled via build config 00:01:35.352 sched: explicitly disabled via build config 00:01:35.352 stack: explicitly disabled via build config 00:01:35.352 ipsec: explicitly disabled via build config 00:01:35.352 pdcp: explicitly disabled via build config 00:01:35.352 fib: explicitly disabled via build config 00:01:35.352 port: explicitly disabled via build config 00:01:35.352 pdump: explicitly disabled via build config 00:01:35.352 table: explicitly disabled via build config 00:01:35.352 pipeline: explicitly disabled via build config 00:01:35.352 graph: explicitly disabled via build config 00:01:35.352 node: explicitly disabled via build config 00:01:35.352 00:01:35.352 drivers: 00:01:35.352 common/cpt: not in enabled drivers build config 00:01:35.352 common/dpaax: not in enabled drivers build config 00:01:35.352 common/iavf: not in enabled drivers build config 00:01:35.352 common/idpf: not in enabled drivers build config 00:01:35.352 common/ionic: not in enabled drivers build config 00:01:35.352 common/mvep: not in enabled drivers build config 00:01:35.352 common/octeontx: not in enabled drivers build config 00:01:35.352 bus/auxiliary: not in enabled drivers build config 00:01:35.352 bus/cdx: not in enabled drivers build config 00:01:35.352 bus/dpaa: not in enabled drivers build config 00:01:35.352 bus/fslmc: not in enabled drivers build config 00:01:35.352 bus/ifpga: not in enabled drivers build config 00:01:35.352 bus/platform: not in enabled drivers build config 00:01:35.352 bus/uacce: not in enabled drivers build config 00:01:35.352 bus/vmbus: not in enabled drivers build config 00:01:35.352 common/cnxk: not in enabled drivers build config 00:01:35.352 common/mlx5: not in enabled drivers build config 00:01:35.352 common/nfp: not in enabled drivers build config 00:01:35.352 common/nitrox: not in enabled drivers build config 00:01:35.352 common/qat: not in enabled drivers build config 
00:01:35.352 common/sfc_efx: not in enabled drivers build config 00:01:35.352 mempool/bucket: not in enabled drivers build config 00:01:35.352 mempool/cnxk: not in enabled drivers build config 00:01:35.352 mempool/dpaa: not in enabled drivers build config 00:01:35.352 mempool/dpaa2: not in enabled drivers build config 00:01:35.352 mempool/octeontx: not in enabled drivers build config 00:01:35.352 mempool/stack: not in enabled drivers build config 00:01:35.352 dma/cnxk: not in enabled drivers build config 00:01:35.352 dma/dpaa: not in enabled drivers build config 00:01:35.352 dma/dpaa2: not in enabled drivers build config 00:01:35.352 dma/hisilicon: not in enabled drivers build config 00:01:35.352 dma/idxd: not in enabled drivers build config 00:01:35.352 dma/ioat: not in enabled drivers build config 00:01:35.352 dma/skeleton: not in enabled drivers build config 00:01:35.352 net/af_packet: not in enabled drivers build config 00:01:35.352 net/af_xdp: not in enabled drivers build config 00:01:35.352 net/ark: not in enabled drivers build config 00:01:35.352 net/atlantic: not in enabled drivers build config 00:01:35.352 net/avp: not in enabled drivers build config 00:01:35.352 net/axgbe: not in enabled drivers build config 00:01:35.352 net/bnx2x: not in enabled drivers build config 00:01:35.352 net/bnxt: not in enabled drivers build config 00:01:35.352 net/bonding: not in enabled drivers build config 00:01:35.352 net/cnxk: not in enabled drivers build config 00:01:35.352 net/cpfl: not in enabled drivers build config 00:01:35.352 net/cxgbe: not in enabled drivers build config 00:01:35.352 net/dpaa: not in enabled drivers build config 00:01:35.352 net/dpaa2: not in enabled drivers build config 00:01:35.352 net/e1000: not in enabled drivers build config 00:01:35.352 net/ena: not in enabled drivers build config 00:01:35.352 net/enetc: not in enabled drivers build config 00:01:35.352 net/enetfec: not in enabled drivers build config 00:01:35.352 net/enic: not in enabled drivers build config 00:01:35.352 net/failsafe: not in enabled drivers build config 00:01:35.352 net/fm10k: not in enabled drivers build config 00:01:35.352 net/gve: not in enabled drivers build config 00:01:35.352 net/hinic: not in enabled drivers build config 00:01:35.352 net/hns3: not in enabled drivers build config 00:01:35.352 net/i40e: not in enabled drivers build config 00:01:35.352 net/iavf: not in enabled drivers build config 00:01:35.352 net/ice: not in enabled drivers build config 00:01:35.352 net/idpf: not in enabled drivers build config 00:01:35.352 net/igc: not in enabled drivers build config 00:01:35.352 net/ionic: not in enabled drivers build config 00:01:35.352 net/ipn3ke: not in enabled drivers build config 00:01:35.352 net/ixgbe: not in enabled drivers build config 00:01:35.352 net/mana: not in enabled drivers build config 00:01:35.352 net/memif: not in enabled drivers build config 00:01:35.352 net/mlx4: not in enabled drivers build config 00:01:35.352 net/mlx5: not in enabled drivers build config 00:01:35.352 net/mvneta: not in enabled drivers build config 00:01:35.352 net/mvpp2: not in enabled drivers build config 00:01:35.352 net/netvsc: not in enabled drivers build config 00:01:35.352 net/nfb: not in enabled drivers build config 00:01:35.352 net/nfp: not in enabled drivers build config 00:01:35.352 net/ngbe: not in enabled drivers build config 00:01:35.352 net/null: not in enabled drivers build config 00:01:35.352 net/octeontx: not in enabled drivers build config 00:01:35.352 net/octeon_ep: not in enabled 
drivers build config 00:01:35.352 net/pcap: not in enabled drivers build config 00:01:35.352 net/pfe: not in enabled drivers build config 00:01:35.352 net/qede: not in enabled drivers build config 00:01:35.352 net/ring: not in enabled drivers build config 00:01:35.352 net/sfc: not in enabled drivers build config 00:01:35.352 net/softnic: not in enabled drivers build config 00:01:35.352 net/tap: not in enabled drivers build config 00:01:35.352 net/thunderx: not in enabled drivers build config 00:01:35.352 net/txgbe: not in enabled drivers build config 00:01:35.352 net/vdev_netvsc: not in enabled drivers build config 00:01:35.352 net/vhost: not in enabled drivers build config 00:01:35.352 net/virtio: not in enabled drivers build config 00:01:35.352 net/vmxnet3: not in enabled drivers build config 00:01:35.352 raw/*: missing internal dependency, "rawdev" 00:01:35.352 crypto/armv8: not in enabled drivers build config 00:01:35.352 crypto/bcmfs: not in enabled drivers build config 00:01:35.352 crypto/caam_jr: not in enabled drivers build config 00:01:35.352 crypto/ccp: not in enabled drivers build config 00:01:35.352 crypto/cnxk: not in enabled drivers build config 00:01:35.352 crypto/dpaa_sec: not in enabled drivers build config 00:01:35.352 crypto/dpaa2_sec: not in enabled drivers build config 00:01:35.352 crypto/ipsec_mb: not in enabled drivers build config 00:01:35.352 crypto/mlx5: not in enabled drivers build config 00:01:35.352 crypto/mvsam: not in enabled drivers build config 00:01:35.352 crypto/nitrox: not in enabled drivers build config 00:01:35.352 crypto/null: not in enabled drivers build config 00:01:35.352 crypto/octeontx: not in enabled drivers build config 00:01:35.352 crypto/openssl: not in enabled drivers build config 00:01:35.352 crypto/scheduler: not in enabled drivers build config 00:01:35.352 crypto/uadk: not in enabled drivers build config 00:01:35.352 crypto/virtio: not in enabled drivers build config 00:01:35.352 compress/isal: not in enabled drivers build config 00:01:35.352 compress/mlx5: not in enabled drivers build config 00:01:35.352 compress/nitrox: not in enabled drivers build config 00:01:35.352 compress/octeontx: not in enabled drivers build config 00:01:35.352 compress/zlib: not in enabled drivers build config 00:01:35.352 regex/*: missing internal dependency, "regexdev" 00:01:35.352 ml/*: missing internal dependency, "mldev" 00:01:35.352 vdpa/ifc: not in enabled drivers build config 00:01:35.352 vdpa/mlx5: not in enabled drivers build config 00:01:35.352 vdpa/nfp: not in enabled drivers build config 00:01:35.352 vdpa/sfc: not in enabled drivers build config 00:01:35.352 event/*: missing internal dependency, "eventdev" 00:01:35.352 baseband/*: missing internal dependency, "bbdev" 00:01:35.352 gpu/*: missing internal dependency, "gpudev" 00:01:35.352 00:01:35.352 00:01:35.611 Build targets in project: 85 00:01:35.611 00:01:35.611 DPDK 24.03.0 00:01:35.611 00:01:35.611 User defined options 00:01:35.611 buildtype : debug 00:01:35.611 default_library : static 00:01:35.611 libdir : lib 00:01:35.611 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:35.611 c_args : -fPIC -Werror 00:01:35.611 c_link_args : 00:01:35.611 cpu_instruction_set: native 00:01:35.611 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:35.611 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:35.611 enable_docs : false 00:01:35.611 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:35.611 enable_kmods : false 00:01:35.611 max_lcores : 128 00:01:35.611 tests : false 00:01:35.611 00:01:35.611 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.869 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:36.133 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:36.133 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:36.133 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:36.133 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:36.133 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:36.133 [6/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:36.133 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:36.133 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:36.133 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:36.133 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:36.133 [11/268] Linking static target lib/librte_kvargs.a 00:01:36.133 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:36.133 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:36.133 [14/268] Linking static target lib/librte_log.a 00:01:36.133 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:36.133 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:36.133 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:36.133 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:36.133 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:36.133 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:36.133 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:36.133 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:36.134 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:36.134 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:36.134 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:36.134 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:36.134 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:36.134 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:36.134 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:36.134 [30/268] Linking static target lib/librte_pci.a 00:01:36.134 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:36.134 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:36.391 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:36.391 [34/268] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:36.391 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:36.391 [36/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.391 [37/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.651 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:36.651 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:36.651 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:36.651 [41/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:36.651 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:36.651 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:36.651 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:36.651 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:36.651 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:36.651 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:36.651 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:36.651 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:36.651 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:36.651 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:36.651 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:36.651 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:36.651 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:36.651 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:36.651 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:36.651 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:36.651 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:36.651 [59/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:36.651 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:36.651 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:36.651 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:36.651 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:36.651 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:36.651 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:36.651 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:36.651 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:36.651 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:36.651 [69/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:36.651 [70/268] Linking static target lib/librte_telemetry.a 00:01:36.651 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:36.651 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:36.651 [73/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:36.651 
[74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:36.651 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:36.651 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:36.651 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:36.651 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:36.651 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:36.651 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:36.651 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:36.651 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:36.651 [83/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:36.652 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:36.652 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:36.652 [86/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:36.652 [87/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:36.652 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:36.652 [89/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:36.652 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:36.652 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:36.652 [92/268] Linking static target lib/librte_meter.a 00:01:36.652 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:36.652 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:36.652 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:36.652 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:36.652 [97/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:36.652 [98/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:36.652 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:36.652 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:36.652 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:36.652 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:36.652 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:36.652 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:36.652 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:36.652 [106/268] Linking static target lib/librte_ring.a 00:01:36.652 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:36.652 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:36.652 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:36.652 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:36.652 [111/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:36.652 [112/268] Linking static target lib/librte_cmdline.a 00:01:36.652 [113/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:36.652 [114/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:36.652 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:36.652 [116/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:36.652 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.652 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:36.652 [119/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:36.652 [120/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:36.652 [121/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:36.652 [122/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:36.652 [123/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:36.652 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:36.652 [125/268] Linking static target lib/librte_timer.a 00:01:36.652 [126/268] Linking static target lib/librte_eal.a 00:01:36.652 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:36.652 [128/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:36.652 [129/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:36.652 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:36.652 [131/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:36.652 [132/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.652 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:36.652 [134/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:36.911 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:36.911 [136/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:36.911 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:36.911 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:36.911 [139/268] Linking static target lib/librte_rcu.a 00:01:36.911 [140/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:36.911 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:36.911 [142/268] Linking static target lib/librte_net.a 00:01:36.911 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:36.911 [144/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:36.911 [145/268] Linking static target lib/librte_mempool.a 00:01:36.911 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:36.911 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:36.911 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:36.911 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:36.911 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:36.911 [151/268] Linking target lib/librte_log.so.24.1 00:01:36.911 [152/268] Linking static target lib/librte_compressdev.a 00:01:36.911 [153/268] Linking static target lib/librte_dmadev.a 00:01:36.911 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:36.911 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:36.911 [156/268] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:36.911 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:36.911 [158/268] Linking static target lib/librte_hash.a 00:01:36.911 [159/268] Linking static target lib/librte_mbuf.a 00:01:36.911 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:36.911 [161/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:36.911 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:36.911 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:36.911 [164/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.911 [165/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:36.911 [166/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:36.911 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:36.911 [168/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.911 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:36.911 [170/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:36.911 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:36.911 [172/268] Linking static target lib/librte_cryptodev.a 00:01:36.911 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:36.911 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:36.911 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.171 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:37.171 [177/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.171 [178/268] Linking target lib/librte_kvargs.so.24.1 00:01:37.171 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:37.171 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:37.171 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:37.171 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:37.171 [183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:37.171 [184/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:37.171 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:37.171 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:37.171 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:37.171 [188/268] Linking static target lib/librte_power.a 00:01:37.171 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:37.171 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:37.171 [191/268] Linking static target lib/librte_security.a 00:01:37.171 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:37.171 [193/268] Linking static target lib/librte_reorder.a 00:01:37.171 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:37.171 [195/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.171 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:37.171 [197/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:37.171 [198/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.171 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:37.171 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:37.171 [201/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.171 [202/268] Linking static target drivers/librte_bus_vdev.a 00:01:37.171 [203/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.171 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:37.430 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:37.430 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.430 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:37.430 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.430 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.430 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.430 [211/268] Linking target lib/librte_telemetry.so.24.1 00:01:37.430 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:37.430 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:37.430 [214/268] Linking static target lib/librte_ethdev.a 00:01:37.430 [215/268] Linking static target drivers/librte_mempool_ring.a 00:01:37.430 [216/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:37.688 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.688 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.688 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.688 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.688 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.688 [222/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.688 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.945 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.945 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:37.945 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.945 [227/268] Linking static target lib/librte_vhost.a 00:01:38.224 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.224 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.162 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.099 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.670 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.959 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.959 [234/268] Linking target lib/librte_eal.so.24.1 00:01:49.959 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:49.959 [236/268] Linking target lib/librte_meter.so.24.1 00:01:49.959 [237/268] Linking target lib/librte_ring.so.24.1 00:01:49.959 [238/268] Linking target lib/librte_pci.so.24.1 00:01:49.959 [239/268] Linking target lib/librte_timer.so.24.1 00:01:49.959 [240/268] Linking target lib/librte_dmadev.so.24.1 00:01:49.959 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:49.959 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:49.959 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:49.959 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:49.959 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:49.960 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:49.960 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:49.960 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:49.960 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:49.960 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:49.960 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:50.216 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:50.216 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:50.216 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:50.474 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:50.474 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:50.474 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:01:50.474 [258/268] Linking target lib/librte_net.so.24.1 00:01:50.474 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:50.474 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:50.474 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:50.474 [262/268] Linking target lib/librte_security.so.24.1 00:01:50.474 [263/268] Linking target lib/librte_hash.so.24.1 00:01:50.474 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:50.733 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:50.733 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:50.733 [267/268] Linking target lib/librte_power.so.24.1 00:01:50.733 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:50.733 INFO: autodetecting backend as ninja 00:01:50.733 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:51.668 CC lib/ut/ut.o 00:01:51.668 CC lib/log/log.o 00:01:51.668 CC lib/log/log_flags.o 00:01:51.668 CC lib/log/log_deprecated.o 00:01:51.668 CC lib/ut_mock/mock.o 00:01:51.927 LIB libspdk_ut.a 00:01:51.927 LIB libspdk_log.a 00:01:51.927 LIB libspdk_ut_mock.a 00:01:52.186 CC lib/dma/dma.o 00:01:52.186 CXX lib/trace_parser/trace.o 00:01:52.186 CC lib/ioat/ioat.o 00:01:52.186 CC lib/util/base64.o 00:01:52.186 CC 
lib/util/bit_array.o 00:01:52.186 CC lib/util/cpuset.o 00:01:52.186 CC lib/util/crc16.o 00:01:52.186 CC lib/util/crc32c.o 00:01:52.186 CC lib/util/crc32.o 00:01:52.186 CC lib/util/crc32_ieee.o 00:01:52.186 CC lib/util/crc64.o 00:01:52.186 CC lib/util/dif.o 00:01:52.186 CC lib/util/fd.o 00:01:52.186 CC lib/util/file.o 00:01:52.186 CC lib/util/hexlify.o 00:01:52.186 CC lib/util/iov.o 00:01:52.186 CC lib/util/math.o 00:01:52.186 CC lib/util/pipe.o 00:01:52.186 CC lib/util/uuid.o 00:01:52.186 CC lib/util/strerror_tls.o 00:01:52.186 CC lib/util/string.o 00:01:52.186 CC lib/util/zipf.o 00:01:52.186 CC lib/util/fd_group.o 00:01:52.186 CC lib/util/xor.o 00:01:52.186 CC lib/vfio_user/host/vfio_user_pci.o 00:01:52.186 CC lib/vfio_user/host/vfio_user.o 00:01:52.444 LIB libspdk_dma.a 00:01:52.444 LIB libspdk_ioat.a 00:01:52.444 LIB libspdk_vfio_user.a 00:01:52.444 LIB libspdk_util.a 00:01:52.703 LIB libspdk_trace_parser.a 00:01:52.703 CC lib/vmd/vmd.o 00:01:52.703 CC lib/vmd/led.o 00:01:52.962 CC lib/rdma_utils/rdma_utils.o 00:01:52.962 CC lib/json/json_parse.o 00:01:52.962 CC lib/json/json_util.o 00:01:52.962 CC lib/json/json_write.o 00:01:52.962 CC lib/rdma_provider/common.o 00:01:52.962 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:52.962 CC lib/idxd/idxd.o 00:01:52.962 CC lib/idxd/idxd_user.o 00:01:52.962 CC lib/env_dpdk/env.o 00:01:52.962 CC lib/idxd/idxd_kernel.o 00:01:52.962 CC lib/env_dpdk/memory.o 00:01:52.962 CC lib/env_dpdk/threads.o 00:01:52.962 CC lib/env_dpdk/pci.o 00:01:52.962 CC lib/env_dpdk/init.o 00:01:52.962 CC lib/env_dpdk/pci_virtio.o 00:01:52.962 CC lib/env_dpdk/pci_ioat.o 00:01:52.962 CC lib/env_dpdk/pci_vmd.o 00:01:52.962 CC lib/conf/conf.o 00:01:52.962 CC lib/env_dpdk/pci_idxd.o 00:01:52.962 CC lib/env_dpdk/pci_event.o 00:01:52.962 CC lib/env_dpdk/sigbus_handler.o 00:01:52.962 CC lib/env_dpdk/pci_dpdk.o 00:01:52.962 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:52.962 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:52.962 LIB libspdk_rdma_provider.a 00:01:52.962 LIB libspdk_rdma_utils.a 00:01:52.962 LIB libspdk_conf.a 00:01:52.962 LIB libspdk_json.a 00:01:53.220 LIB libspdk_idxd.a 00:01:53.220 LIB libspdk_vmd.a 00:01:53.479 CC lib/jsonrpc/jsonrpc_server.o 00:01:53.479 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:53.479 CC lib/jsonrpc/jsonrpc_client.o 00:01:53.479 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:53.479 LIB libspdk_jsonrpc.a 00:01:53.738 LIB libspdk_env_dpdk.a 00:01:53.995 CC lib/rpc/rpc.o 00:01:53.995 LIB libspdk_rpc.a 00:01:54.253 CC lib/notify/notify.o 00:01:54.253 CC lib/trace/trace_flags.o 00:01:54.253 CC lib/trace/trace.o 00:01:54.253 CC lib/notify/notify_rpc.o 00:01:54.253 CC lib/trace/trace_rpc.o 00:01:54.253 CC lib/keyring/keyring.o 00:01:54.253 CC lib/keyring/keyring_rpc.o 00:01:54.512 LIB libspdk_notify.a 00:01:54.512 LIB libspdk_trace.a 00:01:54.512 LIB libspdk_keyring.a 00:01:54.770 CC lib/sock/sock.o 00:01:54.770 CC lib/sock/sock_rpc.o 00:01:54.770 CC lib/thread/iobuf.o 00:01:54.770 CC lib/thread/thread.o 00:01:55.029 LIB libspdk_sock.a 00:01:55.288 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:55.288 CC lib/nvme/nvme_ctrlr.o 00:01:55.288 CC lib/nvme/nvme_fabric.o 00:01:55.288 CC lib/nvme/nvme_ns_cmd.o 00:01:55.288 CC lib/nvme/nvme_ns.o 00:01:55.288 CC lib/nvme/nvme_pcie_common.o 00:01:55.548 CC lib/nvme/nvme_pcie.o 00:01:55.548 CC lib/nvme/nvme_qpair.o 00:01:55.548 CC lib/nvme/nvme.o 00:01:55.548 CC lib/nvme/nvme_quirks.o 00:01:55.548 CC lib/nvme/nvme_transport.o 00:01:55.548 CC lib/nvme/nvme_discovery.o 00:01:55.548 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:55.548 CC 
lib/nvme/nvme_ns_ocssd_cmd.o 00:01:55.548 CC lib/nvme/nvme_tcp.o 00:01:55.548 CC lib/nvme/nvme_poll_group.o 00:01:55.548 CC lib/nvme/nvme_opal.o 00:01:55.548 CC lib/nvme/nvme_io_msg.o 00:01:55.548 CC lib/nvme/nvme_zns.o 00:01:55.548 CC lib/nvme/nvme_stubs.o 00:01:55.548 CC lib/nvme/nvme_auth.o 00:01:55.548 CC lib/nvme/nvme_vfio_user.o 00:01:55.548 CC lib/nvme/nvme_cuse.o 00:01:55.548 CC lib/nvme/nvme_rdma.o 00:01:55.548 LIB libspdk_thread.a 00:01:55.805 CC lib/vfu_tgt/tgt_endpoint.o 00:01:55.805 CC lib/vfu_tgt/tgt_rpc.o 00:01:55.805 CC lib/accel/accel.o 00:01:55.805 CC lib/accel/accel_rpc.o 00:01:55.805 CC lib/accel/accel_sw.o 00:01:55.805 CC lib/init/json_config.o 00:01:55.805 CC lib/init/subsystem.o 00:01:55.805 CC lib/init/subsystem_rpc.o 00:01:55.805 CC lib/init/rpc.o 00:01:55.805 CC lib/blob/blobstore.o 00:01:55.805 CC lib/blob/request.o 00:01:55.805 CC lib/blob/zeroes.o 00:01:55.805 CC lib/blob/blob_bs_dev.o 00:01:55.805 CC lib/virtio/virtio_vhost_user.o 00:01:55.805 CC lib/virtio/virtio.o 00:01:55.805 CC lib/virtio/virtio_vfio_user.o 00:01:55.805 CC lib/virtio/virtio_pci.o 00:01:56.064 LIB libspdk_init.a 00:01:56.064 LIB libspdk_vfu_tgt.a 00:01:56.064 LIB libspdk_virtio.a 00:01:56.323 CC lib/event/app.o 00:01:56.323 CC lib/event/reactor.o 00:01:56.323 CC lib/event/log_rpc.o 00:01:56.323 CC lib/event/app_rpc.o 00:01:56.323 CC lib/event/scheduler_static.o 00:01:56.582 LIB libspdk_accel.a 00:01:56.582 LIB libspdk_event.a 00:01:56.582 LIB libspdk_nvme.a 00:01:56.841 CC lib/bdev/bdev.o 00:01:56.841 CC lib/bdev/bdev_zone.o 00:01:56.841 CC lib/bdev/bdev_rpc.o 00:01:56.841 CC lib/bdev/scsi_nvme.o 00:01:56.841 CC lib/bdev/part.o 00:01:57.780 LIB libspdk_blob.a 00:01:58.039 CC lib/lvol/lvol.o 00:01:58.039 CC lib/blobfs/tree.o 00:01:58.039 CC lib/blobfs/blobfs.o 00:01:58.297 LIB libspdk_lvol.a 00:01:58.297 LIB libspdk_blobfs.a 00:01:58.557 LIB libspdk_bdev.a 00:01:58.815 CC lib/nvmf/ctrlr.o 00:01:58.815 CC lib/nvmf/ctrlr_discovery.o 00:01:58.815 CC lib/nvmf/ctrlr_bdev.o 00:01:58.815 CC lib/nvmf/subsystem.o 00:01:58.815 CC lib/nvmf/nvmf.o 00:01:58.815 CC lib/nvmf/nvmf_rpc.o 00:01:58.816 CC lib/nvmf/transport.o 00:01:58.816 CC lib/nvmf/tcp.o 00:01:58.816 CC lib/nvmf/stubs.o 00:01:58.816 CC lib/nvmf/mdns_server.o 00:01:58.816 CC lib/nvmf/vfio_user.o 00:01:58.816 CC lib/nvmf/rdma.o 00:01:58.816 CC lib/nvmf/auth.o 00:01:58.816 CC lib/ftl/ftl_core.o 00:01:58.816 CC lib/ftl/ftl_init.o 00:01:58.816 CC lib/ftl/ftl_layout.o 00:01:58.816 CC lib/ftl/ftl_debug.o 00:01:58.816 CC lib/scsi/lun.o 00:01:59.074 CC lib/ftl/ftl_io.o 00:01:59.074 CC lib/scsi/dev.o 00:01:59.074 CC lib/nbd/nbd.o 00:01:59.074 CC lib/ftl/ftl_sb.o 00:01:59.074 CC lib/scsi/port.o 00:01:59.074 CC lib/nbd/nbd_rpc.o 00:01:59.074 CC lib/ftl/ftl_l2p.o 00:01:59.074 CC lib/ftl/ftl_l2p_flat.o 00:01:59.074 CC lib/ftl/ftl_nv_cache.o 00:01:59.074 CC lib/scsi/scsi.o 00:01:59.074 CC lib/ftl/ftl_band.o 00:01:59.074 CC lib/ublk/ublk.o 00:01:59.074 CC lib/scsi/scsi_bdev.o 00:01:59.074 CC lib/ftl/ftl_band_ops.o 00:01:59.074 CC lib/ublk/ublk_rpc.o 00:01:59.074 CC lib/ftl/ftl_writer.o 00:01:59.074 CC lib/scsi/scsi_pr.o 00:01:59.074 CC lib/ftl/ftl_rq.o 00:01:59.074 CC lib/scsi/scsi_rpc.o 00:01:59.074 CC lib/ftl/ftl_reloc.o 00:01:59.074 CC lib/ftl/ftl_l2p_cache.o 00:01:59.074 CC lib/scsi/task.o 00:01:59.074 CC lib/ftl/ftl_p2l.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_md.o 
00:01:59.074 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:59.074 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:59.074 CC lib/ftl/utils/ftl_conf.o 00:01:59.074 CC lib/ftl/utils/ftl_mempool.o 00:01:59.074 CC lib/ftl/utils/ftl_md.o 00:01:59.075 CC lib/ftl/utils/ftl_bitmap.o 00:01:59.075 CC lib/ftl/utils/ftl_property.o 00:01:59.075 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:59.075 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:59.075 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:59.075 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:59.075 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:59.075 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:59.075 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:59.075 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:59.075 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:59.075 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:59.075 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:59.075 CC lib/ftl/base/ftl_base_dev.o 00:01:59.075 CC lib/ftl/base/ftl_base_bdev.o 00:01:59.075 CC lib/ftl/ftl_trace.o 00:01:59.334 LIB libspdk_nbd.a 00:01:59.334 LIB libspdk_ublk.a 00:01:59.334 LIB libspdk_scsi.a 00:01:59.593 LIB libspdk_ftl.a 00:01:59.593 CC lib/iscsi/init_grp.o 00:01:59.593 CC lib/iscsi/conn.o 00:01:59.593 CC lib/iscsi/iscsi.o 00:01:59.593 CC lib/iscsi/md5.o 00:01:59.593 CC lib/iscsi/param.o 00:01:59.593 CC lib/iscsi/tgt_node.o 00:01:59.593 CC lib/iscsi/portal_grp.o 00:01:59.593 CC lib/iscsi/iscsi_subsystem.o 00:01:59.593 CC lib/iscsi/iscsi_rpc.o 00:01:59.593 CC lib/iscsi/task.o 00:01:59.593 CC lib/vhost/vhost.o 00:01:59.593 CC lib/vhost/vhost_rpc.o 00:01:59.593 CC lib/vhost/rte_vhost_user.o 00:01:59.593 CC lib/vhost/vhost_scsi.o 00:01:59.593 CC lib/vhost/vhost_blk.o 00:02:00.160 LIB libspdk_nvmf.a 00:02:00.419 LIB libspdk_vhost.a 00:02:00.419 LIB libspdk_iscsi.a 00:02:00.988 CC module/env_dpdk/env_dpdk_rpc.o 00:02:00.988 CC module/vfu_device/vfu_virtio.o 00:02:00.988 CC module/vfu_device/vfu_virtio_blk.o 00:02:00.988 CC module/vfu_device/vfu_virtio_scsi.o 00:02:00.988 CC module/vfu_device/vfu_virtio_rpc.o 00:02:00.988 CC module/accel/dsa/accel_dsa.o 00:02:00.988 CC module/accel/dsa/accel_dsa_rpc.o 00:02:00.988 CC module/accel/iaa/accel_iaa.o 00:02:00.988 CC module/accel/error/accel_error.o 00:02:00.988 CC module/accel/iaa/accel_iaa_rpc.o 00:02:00.988 CC module/accel/error/accel_error_rpc.o 00:02:00.988 CC module/keyring/linux/keyring_rpc.o 00:02:00.988 CC module/keyring/linux/keyring.o 00:02:00.988 LIB libspdk_env_dpdk_rpc.a 00:02:00.988 CC module/keyring/file/keyring_rpc.o 00:02:00.988 CC module/accel/ioat/accel_ioat.o 00:02:00.988 CC module/keyring/file/keyring.o 00:02:00.988 CC module/accel/ioat/accel_ioat_rpc.o 00:02:00.988 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:00.988 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:00.988 CC module/sock/posix/posix.o 00:02:00.988 CC module/scheduler/gscheduler/gscheduler.o 00:02:00.988 CC module/blob/bdev/blob_bdev.o 00:02:01.248 LIB libspdk_keyring_linux.a 00:02:01.248 LIB libspdk_keyring_file.a 00:02:01.248 LIB libspdk_accel_error.a 00:02:01.248 LIB libspdk_scheduler_dpdk_governor.a 00:02:01.248 LIB libspdk_scheduler_gscheduler.a 00:02:01.248 LIB libspdk_accel_iaa.a 00:02:01.248 LIB libspdk_scheduler_dynamic.a 00:02:01.248 LIB libspdk_accel_ioat.a 00:02:01.248 LIB libspdk_accel_dsa.a 00:02:01.248 LIB 
libspdk_blob_bdev.a 00:02:01.248 LIB libspdk_vfu_device.a 00:02:01.506 LIB libspdk_sock_posix.a 00:02:01.764 CC module/bdev/aio/bdev_aio.o 00:02:01.764 CC module/bdev/aio/bdev_aio_rpc.o 00:02:01.764 CC module/blobfs/bdev/blobfs_bdev.o 00:02:01.764 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:01.764 CC module/bdev/split/vbdev_split.o 00:02:01.764 CC module/bdev/split/vbdev_split_rpc.o 00:02:01.764 CC module/bdev/raid/bdev_raid.o 00:02:01.764 CC module/bdev/delay/vbdev_delay.o 00:02:01.764 CC module/bdev/raid/bdev_raid_rpc.o 00:02:01.764 CC module/bdev/raid/bdev_raid_sb.o 00:02:01.764 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:01.764 CC module/bdev/raid/raid0.o 00:02:01.764 CC module/bdev/raid/raid1.o 00:02:01.764 CC module/bdev/raid/concat.o 00:02:01.764 CC module/bdev/ftl/bdev_ftl.o 00:02:01.764 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:01.764 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:01.764 CC module/bdev/malloc/bdev_malloc.o 00:02:01.764 CC module/bdev/error/vbdev_error.o 00:02:01.764 CC module/bdev/iscsi/bdev_iscsi.o 00:02:01.764 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:01.764 CC module/bdev/error/vbdev_error_rpc.o 00:02:01.764 CC module/bdev/lvol/vbdev_lvol.o 00:02:01.764 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:01.764 CC module/bdev/gpt/gpt.o 00:02:01.764 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:01.764 CC module/bdev/gpt/vbdev_gpt.o 00:02:01.764 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:01.764 CC module/bdev/null/bdev_null.o 00:02:01.764 CC module/bdev/null/bdev_null_rpc.o 00:02:01.764 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:01.764 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:01.764 CC module/bdev/passthru/vbdev_passthru.o 00:02:01.764 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:01.764 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:01.764 CC module/bdev/nvme/bdev_nvme.o 00:02:01.764 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:01.764 CC module/bdev/nvme/nvme_rpc.o 00:02:01.764 CC module/bdev/nvme/bdev_mdns_client.o 00:02:01.764 CC module/bdev/nvme/vbdev_opal.o 00:02:01.764 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:01.764 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:01.764 LIB libspdk_blobfs_bdev.a 00:02:01.765 LIB libspdk_bdev_split.a 00:02:01.765 LIB libspdk_bdev_error.a 00:02:02.023 LIB libspdk_bdev_ftl.a 00:02:02.023 LIB libspdk_bdev_aio.a 00:02:02.023 LIB libspdk_bdev_null.a 00:02:02.023 LIB libspdk_bdev_gpt.a 00:02:02.023 LIB libspdk_bdev_delay.a 00:02:02.023 LIB libspdk_bdev_passthru.a 00:02:02.023 LIB libspdk_bdev_iscsi.a 00:02:02.023 LIB libspdk_bdev_zone_block.a 00:02:02.023 LIB libspdk_bdev_malloc.a 00:02:02.023 LIB libspdk_bdev_lvol.a 00:02:02.023 LIB libspdk_bdev_virtio.a 00:02:02.282 LIB libspdk_bdev_raid.a 00:02:02.849 LIB libspdk_bdev_nvme.a 00:02:03.788 CC module/event/subsystems/keyring/keyring.o 00:02:03.788 CC module/event/subsystems/sock/sock.o 00:02:03.788 CC module/event/subsystems/iobuf/iobuf.o 00:02:03.788 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:03.788 CC module/event/subsystems/vmd/vmd.o 00:02:03.788 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:03.788 CC module/event/subsystems/scheduler/scheduler.o 00:02:03.788 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:03.788 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:03.788 LIB libspdk_event_keyring.a 00:02:03.788 LIB libspdk_event_sock.a 00:02:03.788 LIB libspdk_event_scheduler.a 00:02:03.788 LIB libspdk_event_vmd.a 00:02:03.788 LIB libspdk_event_iobuf.a 00:02:03.788 LIB libspdk_event_vhost_blk.a 00:02:03.788 LIB libspdk_event_vfu_tgt.a 
00:02:04.052 CC module/event/subsystems/accel/accel.o 00:02:04.052 LIB libspdk_event_accel.a 00:02:04.370 CC module/event/subsystems/bdev/bdev.o 00:02:04.629 LIB libspdk_event_bdev.a 00:02:04.888 CC module/event/subsystems/ublk/ublk.o 00:02:04.888 CC module/event/subsystems/scsi/scsi.o 00:02:04.888 CC module/event/subsystems/nbd/nbd.o 00:02:04.888 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:04.888 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:05.147 LIB libspdk_event_ublk.a 00:02:05.147 LIB libspdk_event_scsi.a 00:02:05.147 LIB libspdk_event_nbd.a 00:02:05.147 LIB libspdk_event_nvmf.a 00:02:05.407 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:05.407 CC module/event/subsystems/iscsi/iscsi.o 00:02:05.407 LIB libspdk_event_vhost_scsi.a 00:02:05.407 LIB libspdk_event_iscsi.a 00:02:05.665 CC app/spdk_nvme_discover/discovery_aer.o 00:02:05.665 CC app/spdk_top/spdk_top.o 00:02:05.665 TEST_HEADER include/spdk/accel.h 00:02:05.665 CXX app/trace/trace.o 00:02:05.665 TEST_HEADER include/spdk/assert.h 00:02:05.665 TEST_HEADER include/spdk/accel_module.h 00:02:05.665 TEST_HEADER include/spdk/bdev_module.h 00:02:05.665 TEST_HEADER include/spdk/bdev.h 00:02:05.665 TEST_HEADER include/spdk/barrier.h 00:02:05.665 TEST_HEADER include/spdk/base64.h 00:02:05.665 CC app/trace_record/trace_record.o 00:02:05.665 TEST_HEADER include/spdk/bdev_zone.h 00:02:05.665 CC app/spdk_nvme_perf/perf.o 00:02:05.665 TEST_HEADER include/spdk/bit_array.h 00:02:05.665 TEST_HEADER include/spdk/bit_pool.h 00:02:05.665 CC app/spdk_nvme_identify/identify.o 00:02:05.665 TEST_HEADER include/spdk/blob_bdev.h 00:02:05.665 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:05.665 TEST_HEADER include/spdk/blobfs.h 00:02:05.665 TEST_HEADER include/spdk/config.h 00:02:05.665 TEST_HEADER include/spdk/blob.h 00:02:05.665 TEST_HEADER include/spdk/conf.h 00:02:05.665 TEST_HEADER include/spdk/cpuset.h 00:02:05.665 TEST_HEADER include/spdk/crc16.h 00:02:05.665 TEST_HEADER include/spdk/crc64.h 00:02:05.665 CC app/spdk_lspci/spdk_lspci.o 00:02:05.665 TEST_HEADER include/spdk/crc32.h 00:02:05.665 CC test/rpc_client/rpc_client_test.o 00:02:05.665 TEST_HEADER include/spdk/endian.h 00:02:05.665 TEST_HEADER include/spdk/dma.h 00:02:05.665 TEST_HEADER include/spdk/dif.h 00:02:05.925 TEST_HEADER include/spdk/env_dpdk.h 00:02:05.925 TEST_HEADER include/spdk/fd_group.h 00:02:05.925 TEST_HEADER include/spdk/env.h 00:02:05.925 TEST_HEADER include/spdk/event.h 00:02:05.925 TEST_HEADER include/spdk/fd.h 00:02:05.925 TEST_HEADER include/spdk/file.h 00:02:05.925 TEST_HEADER include/spdk/ftl.h 00:02:05.925 TEST_HEADER include/spdk/gpt_spec.h 00:02:05.925 TEST_HEADER include/spdk/hexlify.h 00:02:05.925 TEST_HEADER include/spdk/histogram_data.h 00:02:05.925 TEST_HEADER include/spdk/idxd.h 00:02:05.925 TEST_HEADER include/spdk/idxd_spec.h 00:02:05.925 TEST_HEADER include/spdk/init.h 00:02:05.925 TEST_HEADER include/spdk/ioat.h 00:02:05.925 TEST_HEADER include/spdk/iscsi_spec.h 00:02:05.925 TEST_HEADER include/spdk/ioat_spec.h 00:02:05.925 CC app/spdk_dd/spdk_dd.o 00:02:05.925 TEST_HEADER include/spdk/json.h 00:02:05.925 TEST_HEADER include/spdk/keyring.h 00:02:05.925 TEST_HEADER include/spdk/keyring_module.h 00:02:05.925 TEST_HEADER include/spdk/jsonrpc.h 00:02:05.925 CC app/nvmf_tgt/nvmf_main.o 00:02:05.925 TEST_HEADER include/spdk/log.h 00:02:05.925 TEST_HEADER include/spdk/likely.h 00:02:05.925 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:05.925 TEST_HEADER include/spdk/lvol.h 00:02:05.925 TEST_HEADER include/spdk/mmio.h 00:02:05.925 TEST_HEADER 
include/spdk/nbd.h 00:02:05.925 TEST_HEADER include/spdk/memory.h 00:02:05.925 TEST_HEADER include/spdk/notify.h 00:02:05.925 TEST_HEADER include/spdk/nvme.h 00:02:05.925 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:05.925 TEST_HEADER include/spdk/nvme_intel.h 00:02:05.925 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:05.925 TEST_HEADER include/spdk/nvme_spec.h 00:02:05.925 TEST_HEADER include/spdk/nvme_zns.h 00:02:05.925 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:05.925 TEST_HEADER include/spdk/nvmf.h 00:02:05.925 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:05.925 TEST_HEADER include/spdk/nvmf_transport.h 00:02:05.925 TEST_HEADER include/spdk/nvmf_spec.h 00:02:05.925 TEST_HEADER include/spdk/opal_spec.h 00:02:05.925 TEST_HEADER include/spdk/opal.h 00:02:05.925 TEST_HEADER include/spdk/pipe.h 00:02:05.925 TEST_HEADER include/spdk/queue.h 00:02:05.925 TEST_HEADER include/spdk/pci_ids.h 00:02:05.925 TEST_HEADER include/spdk/rpc.h 00:02:05.925 TEST_HEADER include/spdk/scheduler.h 00:02:05.925 TEST_HEADER include/spdk/reduce.h 00:02:05.925 TEST_HEADER include/spdk/scsi_spec.h 00:02:05.925 TEST_HEADER include/spdk/sock.h 00:02:05.925 TEST_HEADER include/spdk/string.h 00:02:05.926 TEST_HEADER include/spdk/thread.h 00:02:05.926 TEST_HEADER include/spdk/scsi.h 00:02:05.926 TEST_HEADER include/spdk/trace.h 00:02:05.926 CC app/spdk_tgt/spdk_tgt.o 00:02:05.926 TEST_HEADER include/spdk/stdinc.h 00:02:05.926 TEST_HEADER include/spdk/tree.h 00:02:05.926 TEST_HEADER include/spdk/ublk.h 00:02:05.926 TEST_HEADER include/spdk/trace_parser.h 00:02:05.926 TEST_HEADER include/spdk/uuid.h 00:02:05.926 TEST_HEADER include/spdk/util.h 00:02:05.926 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:05.926 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:05.926 TEST_HEADER include/spdk/version.h 00:02:05.926 TEST_HEADER include/spdk/vmd.h 00:02:05.926 TEST_HEADER include/spdk/vhost.h 00:02:05.926 TEST_HEADER include/spdk/xor.h 00:02:05.926 TEST_HEADER include/spdk/zipf.h 00:02:05.926 CC app/iscsi_tgt/iscsi_tgt.o 00:02:05.926 CXX test/cpp_headers/accel.o 00:02:05.926 CXX test/cpp_headers/assert.o 00:02:05.926 CXX test/cpp_headers/accel_module.o 00:02:05.926 CXX test/cpp_headers/bdev.o 00:02:05.926 CXX test/cpp_headers/base64.o 00:02:05.926 CXX test/cpp_headers/barrier.o 00:02:05.926 CXX test/cpp_headers/bdev_module.o 00:02:05.926 CXX test/cpp_headers/bdev_zone.o 00:02:05.926 CXX test/cpp_headers/blob_bdev.o 00:02:05.926 CXX test/cpp_headers/bit_array.o 00:02:05.926 CXX test/cpp_headers/blobfs.o 00:02:05.926 CXX test/cpp_headers/blobfs_bdev.o 00:02:05.926 CXX test/cpp_headers/bit_pool.o 00:02:05.926 CXX test/cpp_headers/blob.o 00:02:05.926 CXX test/cpp_headers/conf.o 00:02:05.926 CXX test/cpp_headers/config.o 00:02:05.926 CXX test/cpp_headers/cpuset.o 00:02:05.926 CXX test/cpp_headers/crc16.o 00:02:05.926 CXX test/cpp_headers/crc32.o 00:02:05.926 CXX test/cpp_headers/dif.o 00:02:05.926 CXX test/cpp_headers/crc64.o 00:02:05.926 CXX test/cpp_headers/endian.o 00:02:05.926 CXX test/cpp_headers/env_dpdk.o 00:02:05.926 CXX test/cpp_headers/dma.o 00:02:05.926 CXX test/cpp_headers/event.o 00:02:05.926 CXX test/cpp_headers/env.o 00:02:05.926 CXX test/cpp_headers/fd_group.o 00:02:05.926 CXX test/cpp_headers/file.o 00:02:05.926 CXX test/cpp_headers/fd.o 00:02:05.926 CXX test/cpp_headers/ftl.o 00:02:05.926 CXX test/cpp_headers/gpt_spec.o 00:02:05.926 CXX test/cpp_headers/hexlify.o 00:02:05.926 CXX test/cpp_headers/histogram_data.o 00:02:05.926 CXX test/cpp_headers/idxd.o 00:02:05.926 CXX test/cpp_headers/idxd_spec.o 00:02:05.926 
CXX test/cpp_headers/init.o 00:02:05.926 CXX test/cpp_headers/ioat.o 00:02:05.926 CXX test/cpp_headers/ioat_spec.o 00:02:05.926 CXX test/cpp_headers/iscsi_spec.o 00:02:05.926 CXX test/cpp_headers/json.o 00:02:05.926 CXX test/cpp_headers/jsonrpc.o 00:02:05.926 CXX test/cpp_headers/keyring.o 00:02:05.926 CXX test/cpp_headers/likely.o 00:02:05.926 CXX test/cpp_headers/keyring_module.o 00:02:05.926 CXX test/cpp_headers/log.o 00:02:05.926 CXX test/cpp_headers/lvol.o 00:02:05.926 CXX test/cpp_headers/memory.o 00:02:05.926 CXX test/cpp_headers/mmio.o 00:02:05.926 CXX test/cpp_headers/nbd.o 00:02:05.926 CC app/fio/nvme/fio_plugin.o 00:02:05.926 CXX test/cpp_headers/notify.o 00:02:05.926 CXX test/cpp_headers/nvme.o 00:02:05.926 CXX test/cpp_headers/nvme_intel.o 00:02:05.926 CXX test/cpp_headers/nvme_ocssd.o 00:02:05.926 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:05.926 CC examples/ioat/perf/perf.o 00:02:05.926 CXX test/cpp_headers/nvme_spec.o 00:02:05.926 CXX test/cpp_headers/nvme_zns.o 00:02:05.926 CXX test/cpp_headers/nvmf_cmd.o 00:02:05.926 CXX test/cpp_headers/nvmf.o 00:02:05.926 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:05.926 CC test/thread/poller_perf/poller_perf.o 00:02:05.926 CXX test/cpp_headers/nvmf_spec.o 00:02:05.926 CXX test/cpp_headers/nvmf_transport.o 00:02:05.926 CXX test/cpp_headers/opal.o 00:02:05.926 CXX test/cpp_headers/pci_ids.o 00:02:05.926 CXX test/cpp_headers/opal_spec.o 00:02:05.926 CXX test/cpp_headers/pipe.o 00:02:05.926 CXX test/cpp_headers/reduce.o 00:02:05.926 CXX test/cpp_headers/queue.o 00:02:05.926 CXX test/cpp_headers/scheduler.o 00:02:05.926 CXX test/cpp_headers/rpc.o 00:02:05.926 CXX test/cpp_headers/scsi.o 00:02:05.926 CXX test/cpp_headers/scsi_spec.o 00:02:05.926 CXX test/cpp_headers/sock.o 00:02:05.926 CXX test/cpp_headers/stdinc.o 00:02:05.926 CXX test/cpp_headers/string.o 00:02:05.926 CC examples/ioat/verify/verify.o 00:02:05.926 CC test/thread/lock/spdk_lock.o 00:02:05.926 CXX test/cpp_headers/thread.o 00:02:05.926 CXX test/cpp_headers/trace_parser.o 00:02:05.926 CXX test/cpp_headers/trace.o 00:02:05.926 CXX test/cpp_headers/tree.o 00:02:05.926 CXX test/cpp_headers/ublk.o 00:02:05.926 CXX test/cpp_headers/util.o 00:02:05.926 CC test/env/vtophys/vtophys.o 00:02:05.926 CC examples/util/zipf/zipf.o 00:02:05.926 CC test/app/jsoncat/jsoncat.o 00:02:05.926 CC test/env/pci/pci_ut.o 00:02:05.926 LINK spdk_lspci 00:02:05.926 CC test/app/histogram_perf/histogram_perf.o 00:02:05.926 CC test/env/memory/memory_ut.o 00:02:05.926 CC app/fio/bdev/fio_plugin.o 00:02:05.926 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:05.926 CC test/app/stub/stub.o 00:02:05.926 CC test/dma/test_dma/test_dma.o 00:02:05.926 LINK spdk_nvme_discover 00:02:05.926 LINK rpc_client_test 00:02:05.926 CXX test/cpp_headers/uuid.o 00:02:05.926 CC test/app/bdev_svc/bdev_svc.o 00:02:05.926 LINK interrupt_tgt 00:02:05.926 LINK spdk_trace_record 00:02:05.926 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:06.186 LINK nvmf_tgt 00:02:06.186 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:06.186 CC test/env/mem_callbacks/mem_callbacks.o 00:02:06.186 CXX test/cpp_headers/version.o 00:02:06.186 CXX test/cpp_headers/vfio_user_pci.o 00:02:06.186 CXX test/cpp_headers/vfio_user_spec.o 00:02:06.186 CXX test/cpp_headers/vmd.o 00:02:06.186 CXX test/cpp_headers/vhost.o 00:02:06.186 CXX test/cpp_headers/xor.o 00:02:06.186 CXX test/cpp_headers/zipf.o 00:02:06.186 LINK poller_perf 00:02:06.186 LINK jsoncat 00:02:06.186 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:06.186 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:06.186 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:06.186 LINK vtophys 00:02:06.186 LINK zipf 00:02:06.186 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:06.186 struct spdk_nvme_fdp_ruhs ruhs; 00:02:06.186 ^ 00:02:06.186 LINK histogram_perf 00:02:06.186 LINK iscsi_tgt 00:02:06.186 LINK spdk_tgt 00:02:06.186 LINK env_dpdk_post_init 00:02:06.186 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:06.186 LINK ioat_perf 00:02:06.186 LINK verify 00:02:06.186 LINK stub 00:02:06.186 LINK bdev_svc 00:02:06.186 LINK spdk_trace 00:02:06.186 LINK spdk_dd 00:02:06.445 LINK pci_ut 00:02:06.445 LINK test_dma 00:02:06.445 1 warning generated. 00:02:06.445 LINK llvm_vfio_fuzz 00:02:06.445 LINK spdk_nvme 00:02:06.445 LINK spdk_nvme_identify 00:02:06.445 LINK vhost_fuzz 00:02:06.445 LINK spdk_bdev 00:02:06.445 LINK nvme_fuzz 00:02:06.445 LINK spdk_nvme_perf 00:02:06.445 LINK mem_callbacks 00:02:06.445 LINK spdk_top 00:02:06.704 LINK llvm_nvme_fuzz 00:02:06.704 CC app/vhost/vhost.o 00:02:06.704 CC examples/idxd/perf/perf.o 00:02:06.704 CC examples/vmd/led/led.o 00:02:06.704 CC examples/vmd/lsvmd/lsvmd.o 00:02:06.704 CC examples/thread/thread/thread_ex.o 00:02:06.704 CC examples/sock/hello_world/hello_sock.o 00:02:06.704 LINK memory_ut 00:02:06.963 LINK led 00:02:06.963 LINK lsvmd 00:02:06.963 LINK vhost 00:02:06.963 LINK idxd_perf 00:02:06.963 LINK hello_sock 00:02:06.963 LINK thread 00:02:06.963 LINK spdk_lock 00:02:07.221 LINK iscsi_fuzz 00:02:07.786 CC test/event/reactor/reactor.o 00:02:07.786 CC test/event/event_perf/event_perf.o 00:02:07.787 CC test/event/reactor_perf/reactor_perf.o 00:02:07.787 CC examples/nvme/hello_world/hello_world.o 00:02:07.787 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:07.787 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:07.787 CC examples/nvme/reconnect/reconnect.o 00:02:07.787 CC test/event/app_repeat/app_repeat.o 00:02:07.787 CC examples/nvme/arbitration/arbitration.o 00:02:07.787 CC examples/nvme/hotplug/hotplug.o 00:02:07.787 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:07.787 CC examples/nvme/abort/abort.o 00:02:07.787 CC test/event/scheduler/scheduler.o 00:02:07.787 LINK reactor 00:02:07.787 LINK reactor_perf 00:02:07.787 LINK event_perf 00:02:07.787 LINK cmb_copy 00:02:07.787 LINK app_repeat 00:02:07.787 LINK pmr_persistence 00:02:07.787 LINK hello_world 00:02:07.787 LINK hotplug 00:02:07.787 LINK scheduler 00:02:07.787 LINK reconnect 00:02:08.045 LINK arbitration 00:02:08.045 LINK abort 00:02:08.045 LINK nvme_manage 00:02:08.045 CC test/nvme/sgl/sgl.o 00:02:08.045 CC test/nvme/reset/reset.o 00:02:08.045 CC test/nvme/reserve/reserve.o 00:02:08.045 CC test/nvme/e2edp/nvme_dp.o 00:02:08.045 CC test/nvme/err_injection/err_injection.o 00:02:08.045 CC test/blobfs/mkfs/mkfs.o 00:02:08.045 CC test/nvme/aer/aer.o 00:02:08.045 CC test/nvme/compliance/nvme_compliance.o 00:02:08.045 CC test/nvme/boot_partition/boot_partition.o 00:02:08.045 CC test/nvme/fused_ordering/fused_ordering.o 00:02:08.045 CC test/nvme/startup/startup.o 00:02:08.045 CC test/nvme/connect_stress/connect_stress.o 00:02:08.045 CC test/nvme/simple_copy/simple_copy.o 00:02:08.045 CC test/nvme/fdp/fdp.o 00:02:08.045 CC test/nvme/overhead/overhead.o 00:02:08.045 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:08.045 CC test/accel/dif/dif.o 00:02:08.045 CC test/nvme/cuse/cuse.o 00:02:08.303 CC 
test/lvol/esnap/esnap.o 00:02:08.303 LINK boot_partition 00:02:08.303 LINK startup 00:02:08.303 LINK err_injection 00:02:08.303 LINK reserve 00:02:08.303 LINK connect_stress 00:02:08.303 LINK fused_ordering 00:02:08.303 LINK doorbell_aers 00:02:08.303 LINK simple_copy 00:02:08.303 LINK mkfs 00:02:08.303 LINK reset 00:02:08.303 LINK sgl 00:02:08.303 LINK aer 00:02:08.303 LINK nvme_dp 00:02:08.303 LINK overhead 00:02:08.303 LINK fdp 00:02:08.303 LINK nvme_compliance 00:02:08.561 LINK dif 00:02:08.561 CC examples/accel/perf/accel_perf.o 00:02:08.819 CC examples/blob/cli/blobcli.o 00:02:08.819 CC examples/blob/hello_world/hello_blob.o 00:02:08.819 LINK hello_blob 00:02:09.078 LINK accel_perf 00:02:09.078 LINK cuse 00:02:09.078 LINK blobcli 00:02:09.646 CC examples/bdev/hello_world/hello_bdev.o 00:02:09.646 CC examples/bdev/bdevperf/bdevperf.o 00:02:09.905 LINK hello_bdev 00:02:10.164 CC test/bdev/bdevio/bdevio.o 00:02:10.164 LINK bdevperf 00:02:10.423 LINK bdevio 00:02:11.360 LINK esnap 00:02:11.619 CC examples/nvmf/nvmf/nvmf.o 00:02:11.879 LINK nvmf 00:02:13.259 00:02:13.259 real 0m45.939s 00:02:13.259 user 5m34.178s 00:02:13.259 sys 2m28.942s 00:02:13.259 16:18:52 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:13.259 16:18:52 make -- common/autotest_common.sh@10 -- $ set +x 00:02:13.259 ************************************ 00:02:13.259 END TEST make 00:02:13.259 ************************************ 00:02:13.259 16:18:52 -- common/autotest_common.sh@1142 -- $ return 0 00:02:13.259 16:18:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:13.259 16:18:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:13.259 16:18:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:13.259 16:18:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.259 16:18:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:13.259 16:18:52 -- pm/common@44 -- $ pid=1891437 00:02:13.259 16:18:52 -- pm/common@50 -- $ kill -TERM 1891437 00:02:13.259 16:18:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.260 16:18:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:13.260 16:18:52 -- pm/common@44 -- $ pid=1891439 00:02:13.260 16:18:52 -- pm/common@50 -- $ kill -TERM 1891439 00:02:13.260 16:18:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.260 16:18:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:13.260 16:18:52 -- pm/common@44 -- $ pid=1891441 00:02:13.260 16:18:52 -- pm/common@50 -- $ kill -TERM 1891441 00:02:13.260 16:18:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.260 16:18:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:13.260 16:18:52 -- pm/common@44 -- $ pid=1891466 00:02:13.260 16:18:52 -- pm/common@50 -- $ sudo -E kill -TERM 1891466 00:02:13.260 16:18:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:13.260 16:18:52 -- nvmf/common.sh@7 -- # uname -s 00:02:13.260 16:18:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:13.260 16:18:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:13.260 16:18:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:13.260 16:18:52 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:02:13.260 16:18:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:13.260 16:18:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:13.260 16:18:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:13.260 16:18:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:13.260 16:18:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:13.260 16:18:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:13.260 16:18:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:13.260 16:18:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:13.260 16:18:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:13.260 16:18:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:13.260 16:18:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:13.260 16:18:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:13.260 16:18:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:13.260 16:18:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:13.260 16:18:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:13.260 16:18:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:13.260 16:18:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.260 16:18:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.260 16:18:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.260 16:18:52 -- paths/export.sh@5 -- # export PATH 00:02:13.260 16:18:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.260 16:18:52 -- nvmf/common.sh@47 -- # : 0 00:02:13.260 16:18:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:13.260 16:18:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:13.260 16:18:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:13.260 16:18:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:13.260 16:18:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:13.260 16:18:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:13.260 16:18:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:13.260 16:18:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:13.260 16:18:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:13.260 16:18:52 -- spdk/autotest.sh@32 -- # uname -s 00:02:13.260 16:18:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux 
']' 00:02:13.260 16:18:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:13.260 16:18:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:13.260 16:18:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:13.260 16:18:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:13.260 16:18:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:13.519 16:18:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:13.519 16:18:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:13.519 16:18:52 -- spdk/autotest.sh@48 -- # udevadm_pid=1954148 00:02:13.519 16:18:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:13.519 16:18:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:13.519 16:18:52 -- pm/common@17 -- # local monitor 00:02:13.519 16:18:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.519 16:18:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.519 16:18:52 -- pm/common@21 -- # date +%s 00:02:13.519 16:18:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.519 16:18:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.519 16:18:52 -- pm/common@21 -- # date +%s 00:02:13.519 16:18:52 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053132 00:02:13.519 16:18:52 -- pm/common@21 -- # date +%s 00:02:13.519 16:18:52 -- pm/common@25 -- # sleep 1 00:02:13.519 16:18:52 -- pm/common@21 -- # date +%s 00:02:13.519 16:18:52 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053132 00:02:13.519 16:18:52 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053132 00:02:13.519 16:18:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053132 00:02:13.519 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721053132_collect-cpu-load.pm.log 00:02:13.519 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721053132_collect-vmstat.pm.log 00:02:13.519 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721053132_collect-cpu-temp.pm.log 00:02:13.519 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721053132_collect-bmc-pm.bmc.pm.log 00:02:14.455 16:18:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:14.455 16:18:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:14.455 16:18:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:14.455 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:02:14.455 16:18:53 -- spdk/autotest.sh@59 -- # 
create_test_list 00:02:14.455 16:18:53 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:14.455 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:02:14.455 16:18:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:14.455 16:18:53 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:14.455 16:18:53 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:14.455 16:18:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:14.455 16:18:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:14.455 16:18:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:14.455 16:18:53 -- common/autotest_common.sh@1455 -- # uname 00:02:14.455 16:18:53 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:14.455 16:18:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:14.455 16:18:53 -- common/autotest_common.sh@1475 -- # uname 00:02:14.455 16:18:53 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:14.455 16:18:53 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:14.455 16:18:53 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:14.455 16:18:53 -- spdk/autotest.sh@72 -- # hash lcov 00:02:14.455 16:18:53 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:14.455 16:18:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:14.455 16:18:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:14.455 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:02:14.455 16:18:53 -- spdk/autotest.sh@91 -- # rm -f 00:02:14.455 16:18:53 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:17.745 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:17.745 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:17.745 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:17.745 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:17.745 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:17.745 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:17.745 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:17.745 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:17.745 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:17.745 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:18.004 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:18.004 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:18.004 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:18.004 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:18.004 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:18.004 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:18.004 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:18.004 16:18:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:18.004 16:18:57 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:18.004 16:18:57 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:18.004 16:18:57 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:18.004 16:18:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:18.004 16:18:57 -- common/autotest_common.sh@1673 -- # is_block_zoned 
nvme0n1 00:02:18.004 16:18:57 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:18.004 16:18:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:18.004 16:18:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:18.004 16:18:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:18.004 16:18:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:18.004 16:18:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:18.004 16:18:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:18.004 16:18:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:18.004 16:18:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:18.004 No valid GPT data, bailing 00:02:18.004 16:18:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:18.262 16:18:57 -- scripts/common.sh@391 -- # pt= 00:02:18.262 16:18:57 -- scripts/common.sh@392 -- # return 1 00:02:18.262 16:18:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:18.262 1+0 records in 00:02:18.262 1+0 records out 00:02:18.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0037471 s, 280 MB/s 00:02:18.262 16:18:57 -- spdk/autotest.sh@118 -- # sync 00:02:18.262 16:18:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:18.262 16:18:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:18.262 16:18:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:26.388 16:19:04 -- spdk/autotest.sh@124 -- # uname -s 00:02:26.388 16:19:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:26.388 16:19:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:26.388 16:19:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:26.388 16:19:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:26.388 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:02:26.388 ************************************ 00:02:26.388 START TEST setup.sh 00:02:26.388 ************************************ 00:02:26.388 16:19:04 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:26.388 * Looking for test storage... 00:02:26.388 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:26.388 16:19:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:26.388 16:19:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:26.388 16:19:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:26.388 16:19:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:26.388 16:19:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:26.388 16:19:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:26.388 ************************************ 00:02:26.388 START TEST acl 00:02:26.388 ************************************ 00:02:26.388 16:19:04 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:26.388 * Looking for test storage... 
00:02:26.388 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:26.388 16:19:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:26.388 16:19:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:26.388 16:19:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:26.388 16:19:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:26.388 16:19:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:26.388 16:19:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:26.388 16:19:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:26.388 16:19:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:26.388 16:19:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:26.388 16:19:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:26.388 16:19:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:26.388 16:19:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:26.388 16:19:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:26.388 16:19:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:26.388 16:19:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:26.388 16:19:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:29.680 16:19:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:29.680 16:19:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:29.680 16:19:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.680 16:19:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:29.680 16:19:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.681 16:19:08 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:32.969 Hugepages 00:02:32.969 node hugesize free / total 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 00:02:32.969 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.969 16:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:32.969 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:32.970 16:19:12 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:32.970 16:19:12 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:32.970 16:19:12 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:32.970 16:19:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:32.970 ************************************ 00:02:32.970 START TEST denied 00:02:32.970 ************************************ 00:02:32.970 16:19:12 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:32.970 16:19:12 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:32.970 16:19:12 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:32.970 16:19:12 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:32.970 16:19:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.970 16:19:12 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:36.272 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:36.272 16:19:15 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:36.272 16:19:15 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:36.272 16:19:15 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:36.272 16:19:15 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:36.272 16:19:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:36.272 
16:19:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:36.272 16:19:15 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:36.272 16:19:15 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:36.272 16:19:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:36.272 16:19:15 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.596 00:02:41.596 real 0m8.142s 00:02:41.596 user 0m2.549s 00:02:41.596 sys 0m4.963s 00:02:41.596 16:19:20 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:41.596 16:19:20 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:41.596 ************************************ 00:02:41.596 END TEST denied 00:02:41.596 ************************************ 00:02:41.596 16:19:20 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:41.596 16:19:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:41.596 16:19:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.596 16:19:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.596 16:19:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:41.596 ************************************ 00:02:41.596 START TEST allowed 00:02:41.596 ************************************ 00:02:41.596 16:19:20 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:41.596 16:19:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:41.596 16:19:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:41.596 16:19:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:41.596 16:19:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.596 16:19:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:46.867 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:46.867 16:19:25 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:46.867 16:19:25 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:46.867 16:19:25 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:46.867 16:19:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.867 16:19:25 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.157 00:02:50.157 real 0m8.877s 00:02:50.157 user 0m2.554s 00:02:50.157 sys 0m4.885s 00:02:50.157 16:19:29 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:50.157 16:19:29 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:50.157 ************************************ 00:02:50.157 END TEST allowed 00:02:50.157 ************************************ 00:02:50.157 16:19:29 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:50.157 00:02:50.157 real 0m24.602s 00:02:50.157 user 0m7.790s 00:02:50.157 sys 0m15.021s 00:02:50.157 16:19:29 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:50.157 16:19:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:50.157 ************************************ 00:02:50.157 END TEST acl 00:02:50.157 ************************************ 00:02:50.157 16:19:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 
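The hugepages run that follows spends most of its xtrace inside setup/common.sh, walking /proc/meminfo one field at a time until it reaches Hugepagesize (and later AnonHugePages). Below is a minimal sketch of that style of lookup for context; the function name and the omission of the per-node meminfo branch are assumptions for illustration, not the exact body of the get_meminfo helper traced here.

get_meminfo_field() {
    # Sketch: print the numeric value of one /proc/meminfo field (e.g. Hugepagesize).
    # Assumption: reads the system-wide file only; the traced helper can also read
    # a per-node /sys/devices/system/node/node*/meminfo when a node is given.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# Usage: get_meminfo_field Hugepagesize   # prints 2048 on this node, matching the trace below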
00:02:50.157 16:19:29 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:50.157 16:19:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.157 16:19:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.157 16:19:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:50.157 ************************************ 00:02:50.157 START TEST hugepages 00:02:50.157 ************************************ 00:02:50.157 16:19:29 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:50.157 * Looking for test storage... 00:02:50.157 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.157 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41622020 kB' 'MemAvailable: 43929776 kB' 'Buffers: 11496 kB' 'Cached: 10201672 kB' 'SwapCached: 16 kB' 'Active: 8534552 kB' 'Inactive: 2263108 kB' 'Active(anon): 8059112 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587776 kB' 'Mapped: 183528 kB' 'Shmem: 7531728 kB' 'KReclaimable: 246896 kB' 'Slab: 791360 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 544464 kB' 'KernelStack: 21904 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439068 kB' 'Committed_AS: 9492972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213396 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 
2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.158 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.158 16:19:29 
[xtrace elided: setup/common.sh@31-32 repeat the same compare-and-continue for each remaining /proc/meminfo field, from Active(file) through HugePages_Free, none of which match Hugepagesize]
16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:50.159 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.160 16:19:29 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:50.160 16:19:29 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:50.160 16:19:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.160 16:19:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.160 16:19:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:50.160 ************************************ 00:02:50.160 START TEST default_setup 00:02:50.160 ************************************ 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.160 16:19:29 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:53.446 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:00:04.5 
(8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:53.446 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:55.346 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43829880 kB' 'MemAvailable: 46137636 kB' 'Buffers: 11496 kB' 'Cached: 10201796 kB' 'SwapCached: 16 kB' 'Active: 8552564 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077124 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605728 kB' 'Mapped: 183696 kB' 'Shmem: 7531852 kB' 'KReclaimable: 246896 kB' 'Slab: 790748 kB' 
'SReclaimable: 246896 kB' 'SUnreclaim: 543852 kB' 'KernelStack: 22032 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9509688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213460 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.346 16:19:34 setup.sh.hugepages.default_setup -- 
[xtrace elided: the same compare-and-continue scan repeats for the remaining /proc/meminfo fields (Inactive, Active(anon), Inactive(anon), Active(file), and onward) while get_meminfo searches for AnonHugePages]
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.347 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43830632 kB' 'MemAvailable: 46138388 kB' 'Buffers: 11496 kB' 'Cached: 10201800 kB' 'SwapCached: 16 kB' 'Active: 8552584 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077144 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605608 kB' 'Mapped: 183628 kB' 'Shmem: 7531856 kB' 'KReclaimable: 246896 kB' 'Slab: 790700 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 543804 kB' 'KernelStack: 22000 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9509704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213508 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.348 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
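For readers skimming the xtrace above: the helper being traced simply snapshots /proc/meminfo into an array, strips any per-node "Node <N> " prefix, and then splits each "Key: value" line on ': ' until the requested field matches, echoing 0 when it does not. Below is a minimal standalone sketch of that pattern; the function name get_meminfo_value and its exact layout are assumptions made for illustration, not the SPDK setup/common.sh source.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Sketch only: look up one field of a meminfo-style file in the same general
# way the traced helper does.
get_meminfo_value() {
    local get=$1                       # e.g. HugePages_Surp
    local mem_f=${2:-/proc/meminfo}    # a per-node meminfo path can be passed instead
    local var val _
    local -a mem

    mapfile -t mem < "$mem_f"          # one meminfo line per array element
    # Per-node meminfo lines start with "Node <N> "; strip that prefix so the
    # same parsing works for both the global and the per-node files.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0                             # field absent: report zero
}

# Example: print the surplus huge page count, as the trace above is doing.
get_meminfo_value HugePages_Surp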
00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43832360 kB' 'MemAvailable: 46140116 kB' 'Buffers: 11496 kB' 'Cached: 10201820 kB' 'SwapCached: 16 kB' 'Active: 8552428 kB' 'Inactive: 2263108 kB' 'Active(anon): 8076988 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605428 kB' 'Mapped: 183648 kB' 'Shmem: 7531876 kB' 'KReclaimable: 246896 kB' 'Slab: 790796 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 543900 kB' 'KernelStack: 21984 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9509724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213540 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 
16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 
16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.349 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:55.350 nr_hugepages=1024 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.350 resv_hugepages=0 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.350 surplus_hugepages=0 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.350 anon_hugepages=0 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43831220 kB' 'MemAvailable: 46138976 kB' 'Buffers: 11496 kB' 'Cached: 10201840 kB' 'SwapCached: 16 kB' 'Active: 8552404 kB' 'Inactive: 
2263108 kB' 'Active(anon): 8076964 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605372 kB' 'Mapped: 183648 kB' 'Shmem: 7531896 kB' 'KReclaimable: 246896 kB' 'Slab: 790796 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 543900 kB' 'KernelStack: 22096 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9509748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
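The long run of IFS=': ' / read -r var val _ / continue entries above and below is a single xtrace'd pass of the meminfo scanner in setup/common.sh: it walks every field of /proc/meminfo (or of /sys/devices/system/node/nodeN/meminfo when a node argument is given) until the requested key matches, then echoes that value and returns. A minimal stand-alone sketch of the pattern, for orientation only and not the shipped helper (the real function uses mapfile and strips the "Node N " prefix from per-node files, as the trace shows):

    get_meminfo() {   # usage: get_meminfo HugePages_Total [node] -- illustrative sketch only
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node counters such as HugePages_Surp
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            # per-node files prefix each line with "Node N "; the real helper strips that before matching
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }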
00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.350 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25438184 kB' 'MemUsed: 7153900 kB' 'SwapCached: 16 kB' 'Active: 3388640 kB' 'Inactive: 180800 kB' 'Active(anon): 3172020 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3336640 kB' 'Mapped: 121828 kB' 'AnonPages: 235916 kB' 'Shmem: 2939220 kB' 'KernelStack: 12824 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136584 kB' 'Slab: 392436 kB' 'SReclaimable: 136584 kB' 'SUnreclaim: 255852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.351 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:55.352 node0=1024 expecting 1024 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:55.352 00:02:55.352 real 0m5.021s 00:02:55.352 user 0m1.296s 00:02:55.352 sys 0m2.232s 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:55.352 16:19:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:55.352 ************************************ 00:02:55.352 END TEST default_setup 00:02:55.352 ************************************ 00:02:55.352 16:19:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:55.352 16:19:34 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:55.352 16:19:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:55.352 16:19:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.352 16:19:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:55.352 ************************************ 00:02:55.352 START TEST per_node_1G_alloc 00:02:55.352 ************************************ 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:55.352 16:19:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.352 16:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:58.662 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 
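As the hugepages.sh@146 entries above show, the per_node_1G_alloc test drives the per-node reservation through environment variables rather than command-line flags: it sets NRHUGE=512 and HUGENODE=0,1 and then re-runs scripts/setup.sh. An equivalent manual invocation would look roughly like this (values and path simply mirror the trace; the node list depends on the machine):

    # Run as root: reserve 512 default-size hugepages on NUMA nodes 0 and 1, then set up device bindings.
    NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh

The surrounding 'Already using the vfio-pci driver' lines are setup.sh reporting that the listed PCI functions are already bound to vfio-pci and need no driver change.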
00:02:58.662 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.662 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.662 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43869116 kB' 'MemAvailable: 46176872 kB' 'Buffers: 11496 kB' 'Cached: 10201944 kB' 'SwapCached: 16 kB' 'Active: 8552016 kB' 'Inactive: 2263108 kB' 'Active(anon): 8076576 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604880 kB' 'Mapped: 182508 kB' 'Shmem: 7532000 kB' 'KReclaimable: 246896 kB' 'Slab: 789096 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542200 kB' 'KernelStack: 22224 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9503192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213716 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.663 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43870644 kB' 'MemAvailable: 46178400 kB' 'Buffers: 11496 kB' 'Cached: 10201948 kB' 'SwapCached: 16 kB' 'Active: 8552864 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077424 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605820 kB' 'Mapped: 182464 kB' 'Shmem: 7532004 kB' 'KReclaimable: 246896 kB' 'Slab: 789088 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542192 kB' 'KernelStack: 22240 kB' 'PageTables: 9412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9503208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213764 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.664 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 
16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.665 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.666 16:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43871064 kB' 'MemAvailable: 46178820 kB' 'Buffers: 11496 kB' 'Cached: 10201948 kB' 'SwapCached: 16 kB' 'Active: 8552816 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077376 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605720 kB' 'Mapped: 182464 kB' 'Shmem: 7532004 kB' 'KReclaimable: 246896 kB' 'Slab: 789104 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542208 kB' 'KernelStack: 22256 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9503232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213732 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.666 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.667 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 
16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 
16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:58.668 
nr_hugepages=1024 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:58.668 resv_hugepages=0 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:58.668 surplus_hugepages=0 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:58.668 anon_hugepages=0 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.668 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43872872 kB' 'MemAvailable: 46180628 kB' 'Buffers: 11496 kB' 'Cached: 10201988 kB' 'SwapCached: 16 kB' 'Active: 8552452 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077012 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605320 kB' 'Mapped: 182464 kB' 'Shmem: 7532044 kB' 'KReclaimable: 246896 kB' 'Slab: 789136 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542240 kB' 'KernelStack: 22192 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9503252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213636 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.669 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26506304 kB' 'MemUsed: 6085780 kB' 'SwapCached: 16 kB' 'Active: 3388632 kB' 'Inactive: 180800 kB' 'Active(anon): 3172012 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3336756 kB' 'Mapped: 121552 kB' 'AnonPages: 235852 kB' 'Shmem: 2939336 kB' 'KernelStack: 13000 kB' 'PageTables: 5116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136584 kB' 'Slab: 391164 kB' 'SReclaimable: 136584 kB' 'SUnreclaim: 254580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.670 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.671 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 17366188 kB' 'MemUsed: 10336960 kB' 'SwapCached: 0 kB' 'Active: 5163412 kB' 'Inactive: 2082308 kB' 'Active(anon): 4904592 kB' 'Inactive(anon): 57092 kB' 'Active(file): 258820 kB' 'Inactive(file): 2025216 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6876768 kB' 'Mapped: 60912 kB' 'AnonPages: 369012 kB' 'Shmem: 4592732 kB' 'KernelStack: 9112 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110312 kB' 'Slab: 397972 kB' 'SReclaimable: 110312 kB' 
'SUnreclaim: 287660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 
16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.672 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 
16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:58.673 node0=512 expecting 512 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:58.673 node1=512 expecting 512 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:58.673 00:02:58.673 real 0m3.363s 00:02:58.673 user 0m1.233s 00:02:58.673 sys 0m2.190s 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:58.673 16:19:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:58.673 ************************************ 00:02:58.673 END TEST per_node_1G_alloc 00:02:58.673 ************************************ 00:02:58.674 16:19:38 
setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:58.674 16:19:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:58.674 16:19:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.674 16:19:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.674 16:19:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:58.674 ************************************ 00:02:58.674 START TEST even_2G_alloc 00:02:58.674 ************************************ 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.674 16:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:01.964 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:01.964 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43919980 kB' 'MemAvailable: 46227736 kB' 'Buffers: 11496 
kB' 'Cached: 10202108 kB' 'SwapCached: 16 kB' 'Active: 8553840 kB' 'Inactive: 2263108 kB' 'Active(anon): 8078400 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606748 kB' 'Mapped: 182476 kB' 'Shmem: 7532164 kB' 'KReclaimable: 246896 kB' 'Slab: 789196 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542300 kB' 'KernelStack: 22096 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9503872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213668 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.964 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.965 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.965 
16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ [... the same [[ field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _ cycle repeats for every non-matching /proc/meminfo field from Active through HardwareCorrupted ...] 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43923504 kB' 'MemAvailable: 46231260 kB' 'Buffers: 11496 kB' 'Cached: 10202108 kB' 'SwapCached: 16 kB' 'Active: 8553492 kB' 'Inactive: 2263108 kB' 'Active(anon): 8078052 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606296 kB' 'Mapped: 182416 kB' 'Shmem: 7532164 kB' 'KReclaimable: 246896 kB' 'Slab: 789172 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542276 kB' 'KernelStack: 21968 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9503888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213684 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.966 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.966 
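By this point even_2G_alloc has asked for 2097152 kB (2 GiB) of the default 2048 kB hugepages, i.e. nr_hugepages=1024, and hugepages.sh has pre-seeded nodes_test with 512 pages for each of the two NUMA nodes (NRHUGE=1024, HUGE_EVEN_ALLOC=yes); verify_nr_hugepages has just read AnonHugePages (anon=0) and is now fetching HugePages_Surp the same way. A rough sketch of the size-to-pages and even-split arithmetic follows; variable names are illustrative, not the exact hugepages.sh internals.

```bash
#!/usr/bin/env bash
# Rough sketch of the even-allocation arithmetic seen in the trace:
# requested size -> page count -> equal share per NUMA node.
# Variable names are illustrative, not the exact hugepages.sh internals.
size_kb=2097152                                                     # 2 GiB requested
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # typically 2048
nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 2097152 / 2048 = 1024

no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
(( no_nodes > 0 )) || no_nodes=1                                    # 2 on the host traced here

declare -a nodes_test
for (( n = 0; n < no_nodes; n++ )); do
    nodes_test[n]=$(( nr_hugepages / no_nodes ))                    # 512 per node when no_nodes=2
done

for n in "${!nodes_test[@]}"; do
    echo "node$n=${nodes_test[n]} expecting ${nodes_test[n]}"
done
```

The final echoes mirror the "node0=512 expecting 512" / "node1=512 expecting 512" lines produced by the previous test and expected again here.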
16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' [... the same [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _ cycle repeats for every non-matching /proc/meminfo field from MemAvailable through FilePmdMapped ...] 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32
-- # continue 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.968 16:19:41 
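The counters gathered so far are anon=0 and surp=0, and HugePages_Rsvd is about to be read the same way; together with HugePages_Total: 1024 and Hugepagesize: 2048 kB from the meminfo dumps, that is the 2 GiB allocation split 512/512 across the two nodes. A small sketch of the end check this is building toward, reading the stock sysfs counters directly rather than through the script's meminfo parsing (paths are standard kernel locations; the expected value mirrors the trace, not something computed here):

```bash
#!/usr/bin/env bash
# Sketch of the verification this trace is building toward: global surplus and
# reserved hugepages should be 0, and each NUMA node should hold its even share
# of the 1024 pages (512 apiece on the two-node host traced here).
expected_per_node=512

surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
echo "surplus=$surp reserved=$rsvd"    # both 0 in the run above

for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue
    node=${node_dir##*/node}
    got=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    status=ok
    (( got == expected_per_node )) || status=MISMATCH
    echo "node$node=$got expecting $expected_per_node ($status)"
done
```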
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.968 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43923616 kB' 'MemAvailable: 46231372 kB' 'Buffers: 11496 kB' 'Cached: 10202124 kB' 'SwapCached: 16 kB' 'Active: 8553820 kB' 'Inactive: 2263108 kB' 'Active(anon): 8078380 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606612 kB' 'Mapped: 182408 kB' 'Shmem: 7532180 kB' 'KReclaimable: 246896 kB' 'Slab: 789172 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542276 kB' 'KernelStack: 22112 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9502420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213716 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 
16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.969 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.970 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
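Note on reading this trace: every comparison shows its right-hand side as \H\u\g\e\P\a\g\e\s\_\R\s\v\d because bash's xtrace escapes each character of a quoted [[ ... == "..." ]] operand to mark it as a literal string match rather than a glob. A minimal standalone demo of that behaviour follows (not part of the SPDK scripts; the variable names are made up for illustration):

#!/usr/bin/env bash
# Illustration only: quoted RHS of [[ == ]] is printed character-escaped by xtrace.
set -x
get="HugePages_Rsvd"
var="Cached"
[[ $var == "$get" ]] && echo "match" || echo "no match"
# xtrace prints roughly:
#   + [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]

Every /proc/meminfo key that fails this comparison is what shows up in the log as the "continue" at common.sh@32.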
00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
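Taken together, the common.sh@17-@33 entries outline a get_meminfo helper: pick /proc/meminfo (or a per-node meminfo file when a node number is given), strip the "Node N " prefix, then scan key/value pairs until the requested field is found, echo its value and return 0 - the "echo 0" / "return 0" just below is that hit for HugePages_Rsvd. A rough reconstruction, with details not visible in the trace (argument handling, exact output formatting) assumed:

#!/usr/bin/env bash
# Reconstruction of the get_meminfo pattern seen in the xtrace above (a sketch,
# not the actual SPDK setup/common.sh source).
shopt -s extglob   # needed for the +([0-9]) pattern used to strip the node prefix

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node files only exist when a node number was passed and sysfs exposes it.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys are the "continue" lines in the trace
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Rsvd      # system-wide reserved huge pages
get_meminfo HugePages_Surp 0    # surplus huge pages on NUMA node 0

Called as get_meminfo HugePages_Rsvd here, the echoed 0 becomes resv=0 at hugepages.sh@100 a few entries further down.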
00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:01.971 nr_hugepages=1024 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:01.971 resv_hugepages=0 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:01.971 surplus_hugepages=0 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:01.971 anon_hugepages=0 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43924900 kB' 'MemAvailable: 46232656 kB' 'Buffers: 11496 kB' 'Cached: 10202132 kB' 'SwapCached: 16 kB' 'Active: 8552868 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077428 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605712 kB' 'Mapped: 182476 kB' 'Shmem: 7532188 kB' 'KReclaimable: 246896 kB' 'Slab: 789292 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542396 kB' 'KernelStack: 22128 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9503932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213668 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.971 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
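The snapshot printed just above carries the numbers the even_2G_alloc check cares about: HugePages_Total: 1024 with Hugepagesize: 2048 kB, i.e. exactly 2 GiB of huge page memory, and with two NUMA nodes an even split means 512 pages per node. The arithmetic as a standalone sketch (values copied from the trace, not re-measured):

#!/usr/bin/env bash
# Back-of-the-envelope check of what "even_2G_alloc" verifies here (a sketch,
# not the SPDK test itself): 1024 pages x 2048 kB = 2 GiB, split 512/512 over 2 nodes.
nr_hugepages=1024
hugepagesize_kb=2048
nodes=2

total_kb=$(( nr_hugepages * hugepagesize_kb ))
echo "total huge page memory: ${total_kb} kB ($(( total_kb / 1024 / 1024 )) GiB)"
echo "expected per node:      $(( nr_hugepages / nodes )) pages"

# The consistency check seen at hugepages.sh@107/@110 boils down to:
surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv )) && echo "global count matches"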
00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.972 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.973 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26541632 kB' 'MemUsed: 6050452 kB' 'SwapCached: 16 kB' 'Active: 3388668 kB' 'Inactive: 180800 kB' 'Active(anon): 3172048 kB' 'Inactive(anon): 16 kB' 
'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3336884 kB' 'Mapped: 122064 kB' 'AnonPages: 235764 kB' 'Shmem: 2939464 kB' 'KernelStack: 12888 kB' 'PageTables: 4800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136584 kB' 'Slab: 391280 kB' 'SReclaimable: 136584 kB' 'SUnreclaim: 254696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.974 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
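The per-node pass underway here looks up HugePages_Surp in /sys/devices/system/node/node0/meminfo (the node1 pass follows right after). Node 0's snapshot a few entries back reports HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0, i.e. an even half of the 1024-page pool with no surplus. An alternative way to eyeball the same split is the standard per-node sysfs counters rather than the SPDK helpers (a sketch; these are generic kernel sysfs paths, not taken from this repo):

#!/usr/bin/env bash
# Print each NUMA node's 2 MB huge page counters; after an even 2G allocation
# both nodes should show nr_hugepages=512.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue
    node=${node_dir##*node}
    nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    free=$(cat "$node_dir/hugepages/hugepages-2048kB/free_hugepages")
    printf 'node%s: nr_hugepages=%s free_hugepages=%s\n' "$node" "$nr" "$free"
done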
00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.975 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 17373820 kB' 'MemUsed: 10329328 kB' 'SwapCached: 0 kB' 'Active: 5168632 kB' 'Inactive: 2082308 kB' 'Active(anon): 4909812 kB' 
'Inactive(anon): 57092 kB' 'Active(file): 258820 kB' 'Inactive(file): 2025216 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6876804 kB' 'Mapped: 60916 kB' 'AnonPages: 374220 kB' 'Shmem: 4592768 kB' 'KernelStack: 9128 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110312 kB' 'Slab: 398016 kB' 'SReclaimable: 110312 kB' 'SUnreclaim: 287704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.976 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:01.977 node0=512 expecting 512 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:01.977 node1=512 expecting 512 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:01.977 00:03:01.977 real 0m3.011s 00:03:01.977 user 0m1.030s 00:03:01.977 sys 0m1.921s 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:01.977 16:19:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:01.977 ************************************ 00:03:01.977 END TEST even_2G_alloc 00:03:01.977 ************************************ 00:03:01.977 16:19:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:01.977 16:19:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:01.977 16:19:41 setup.sh.hugepages 
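The bulk of the trace above is setup/common.sh's get_meminfo helper scanning every key in a meminfo file until it reaches the one it was asked for; each non-matching key shows up as one [[ ... ]] / continue pair. A minimal sketch of that helper, reconstructed from the xtrace (function and variable names follow the trace; the exact argument and error handling are assumptions):

  # Reconstruction of setup/common.sh:get_meminfo as traced above (a sketch, not the shipped code).
  shopt -s extglob                          # needed for the "Node N " prefix strip below
  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo mem

      # Per-node counters live in sysfs and carry a "Node N " prefix on every line.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix

      # Scan key by key; every skipped key is one "continue" in the trace.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "${val:-0}"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
  }

In the node-1 read above this resolves to /sys/devices/system/node/node1/meminfo and stops at HugePages_Surp: 0, which is why the scan ends with "echo 0" / "return 0".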
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.977 16:19:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.977 16:19:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:01.977 ************************************ 00:03:01.977 START TEST odd_alloc 00:03:01.977 ************************************ 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:01.977 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.978 16:19:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:05.296 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:00:04.4 (8086 2021): Already 
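The odd_alloc prologue above asks get_test_nr_hugepages for 2098176 kB, which rounds up to nr_hugepages=1025 pages of 2048 kB, and get_test_nr_hugepages_per_node then spreads them over the two NUMA nodes so that node1 gets 512 and node0 gets 513. A short sketch of that split as it appears in the trace (names follow hugepages.sh@81-84; the exact arithmetic is inferred from the ': 513' / ': 1' no-op evaluations and is an assumption):

  _nr_hugepages=1025     # HUGEMEM=2049 -> 2098176 kB requested -> 1025 x 2048 kB pages
  _no_nodes=2
  nodes_test=()

  while (( _no_nodes > 0 )); do
      nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
      : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # pages left: 1025 -> 513 -> 0
      : $(( --_no_nodes ))                                  # nodes left: 2 -> 1 -> 0
  done

  declare -p nodes_test   # declare -a nodes_test=([0]="513" [1]="512")

With HUGE_EVEN_ALLOC=yes the setup.sh run that follows is expected to realize exactly this split, which the verify_nr_hugepages pass below then checks against sysfs.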
using the vfio-pci driver 00:03:05.296 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:05.296 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.296 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:05.296 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:05.296 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.296 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.296 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43907648 kB' 'MemAvailable: 46215404 kB' 'Buffers: 11496 kB' 'Cached: 10202272 kB' 'SwapCached: 16 kB' 'Active: 8553772 kB' 'Inactive: 2263108 kB' 'Active(anon): 8078332 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605944 kB' 'Mapped: 182924 kB' 'Shmem: 7532328 kB' 
'KReclaimable: 246896 kB' 'Slab: 789508 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542612 kB' 'KernelStack: 21952 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9502100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213540 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 
16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.297 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 
16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.298 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
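At this point verify_nr_hugepages has already established anon=0 from AnonHugePages and is part-way through re-reading HugePages_Surp; HugePages_Rsvd follows, first system-wide and then per node, before the per-node totals are compared with the expected split. A compressed, hedged reconstruction of that accounting, based on the hugepages.sh line numbers visible in the trace and on the get_meminfo sketch above (nodes_sys holding the counts actually read back from sysfs is an assumption):

  anon=$(get_meminfo AnonHugePages)       # 0 here: THP is not backing the test pages
  surp=$(get_meminfo HugePages_Surp)      # system-wide surplus, 0
  resv=$(get_meminfo HugePages_Rsvd)      # system-wide reserved, 0

  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                   # fold reserved pages in
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # plus per-node surplus
  done
  # Each node is then reported as "node<N>=<count> expecting <count>", and the test
  # passes on a final [[ ... == ... ]] comparison like the one at the end of even_2G_alloc.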
00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43908800 kB' 'MemAvailable: 46216556 kB' 'Buffers: 11496 kB' 'Cached: 10202276 kB' 'SwapCached: 16 kB' 'Active: 8552776 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077336 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605448 kB' 'Mapped: 182836 kB' 'Shmem: 7532332 kB' 'KReclaimable: 246896 kB' 'Slab: 789540 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542644 kB' 'KernelStack: 21904 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9502116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213492 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.299 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.300 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.301 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43909208 kB' 'MemAvailable: 46216964 kB' 'Buffers: 11496 kB' 'Cached: 10202292 kB' 'SwapCached: 16 kB' 'Active: 8552852 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077412 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605484 kB' 'Mapped: 182484 kB' 'Shmem: 7532348 kB' 'KReclaimable: 246896 kB' 'Slab: 789540 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542644 kB' 'KernelStack: 21904 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9502136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213492 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.302 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
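The surrounding trace is setup/common.sh's get_meminfo helper scanning one meminfo field per iteration until it reaches the requested key (here HugePages_Rsvd) and echoing its value. A minimal bash sketch of that pattern, assuming a simplified helper rather than the verbatim SPDK source:

shopt -s extglob                        # the "+([0-9])" pattern below needs extended globbing
# Minimal sketch of the lookup pattern visible in the trace (an assumed
# simplification, not the verbatim setup/common.sh implementation).
get_meminfo() {
    local get=$1 node=${2:-} mem line var val _
    local mem_f=/proc/meminfo
    # With a node argument, the per-node copy is read instead of the global file.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix every line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"                # e.g. HugePages_Surp -> 0, HugePages_Total -> 1025
        return 0
    done
    echo 0                              # fallback for a field that never appears (assumption)
}
surp=$(get_meminfo HugePages_Surp)      # -> 0, matching the value returned in the trace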
00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.303 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 
16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:05.304 nr_hugepages=1025 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:05.304 resv_hugepages=0 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:05.304 surplus_hugepages=0 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:05.304 anon_hugepages=0 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:05.304 16:19:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43908956 kB' 'MemAvailable: 46216712 kB' 'Buffers: 11496 kB' 'Cached: 10202316 kB' 'SwapCached: 16 kB' 'Active: 8552864 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077424 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605484 kB' 'Mapped: 182484 kB' 'Shmem: 7532372 kB' 'KReclaimable: 246896 kB' 'Slab: 789540 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542644 kB' 'KernelStack: 21904 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9502156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213492 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.304 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.305 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26543064 kB' 'MemUsed: 6049020 kB' 'SwapCached: 16 kB' 'Active: 3390760 kB' 'Inactive: 180800 kB' 'Active(anon): 3174140 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3337016 kB' 'Mapped: 121572 kB' 'AnonPages: 237784 kB' 'Shmem: 2939596 kB' 'KernelStack: 12904 kB' 'PageTables: 4868 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136584 kB' 'Slab: 391316 kB' 'SReclaimable: 136584 kB' 'SUnreclaim: 254732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 
16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
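From here the odd_alloc test switches to per-node accounting: get_nodes recorded an expected split of 512 hugepages on node 0 and 513 on node 1, and get_meminfo is re-run with a node argument so that /sys/devices/system/node/nodeN/meminfo is read instead of /proc/meminfo. A rough sketch of that per-node verification, reusing the get_meminfo sketch above; the variable names and the summary line are illustrative, not the exact setup/hugepages.sh code:

# Illustrative per-node check sketched from the trace (not the verbatim
# setup/hugepages.sh logic); assumes the get_meminfo sketch defined earlier.
# The deliberately odd total of 1025 is expected to split 512 / 513 across nodes.
nodes_expected=([0]=512 [1]=513)
total=0
for node in 0 1; do
    want=${nodes_expected[$node]}
    got=$(get_meminfo HugePages_Total "$node")   # reads /sys/devices/system/node/node$node/meminfo
    surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes in the log above
    (( got == want + surp )) || echo "node$node: expected $want hugepages, found $got"
    (( total += got ))
done
(( total == 1025 )) && echo "odd_alloc: $total hugepages split across both nodes"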
00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.306 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 17366168 kB' 'MemUsed: 10336980 kB' 'SwapCached: 0 kB' 'Active: 5162264 kB' 'Inactive: 2082308 kB' 'Active(anon): 4903444 kB' 'Inactive(anon): 57092 kB' 'Active(file): 258820 kB' 'Inactive(file): 2025216 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6876828 kB' 'Mapped: 60912 kB' 'AnonPages: 367788 kB' 'Shmem: 4592792 kB' 'KernelStack: 8968 kB' 'PageTables: 3428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110312 kB' 'Slab: 398224 kB' 'SReclaimable: 110312 kB' 'SUnreclaim: 287912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 
16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.307 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.308 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
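The xtrace above steps through setup/common.sh's get_meminfo helper one meminfo field at a time: it picks /proc/meminfo or the per-node sysfs file, strips the "Node N " prefix sysfs adds, then scans key/value pairs until the requested field is found and echoes its value. A minimal, self-contained sketch of that behaviour, reconstructed from the trace rather than copied from the script (the real helper streams the array through a read loop instead of a herestring), is:

#!/usr/bin/env bash
shopt -s extglob                       # needed for the +([0-9]) prefix-strip pattern
get_meminfo() {
    local get=$1 node=$2               # e.g. get_meminfo HugePages_Surp 1
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node statistics come from sysfs when a node index is given
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on sysfs lines
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo HugePages_Surp 1           # prints node1's surplus count (0 in the run above)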
00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:05.309 node0=512 expecting 513 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:05.309 node1=513 expecting 512 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:05.309 00:03:05.309 real 0m3.600s 00:03:05.309 user 0m1.380s 00:03:05.309 sys 0m2.287s 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.309 16:19:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:05.309 ************************************ 00:03:05.309 END TEST odd_alloc 00:03:05.309 ************************************ 00:03:05.568 16:19:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:05.568 16:19:44 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:05.568 16:19:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.568 16:19:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.568 16:19:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:05.568 ************************************ 00:03:05.568 START TEST custom_alloc 00:03:05.568 ************************************ 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.568 16:19:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:08.860 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:80:04.7 
(8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:08.860 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42904480 kB' 'MemAvailable: 45212236 kB' 'Buffers: 11496 kB' 'Cached: 10202432 kB' 'SwapCached: 16 kB' 'Active: 8554356 kB' 'Inactive: 2263108 kB' 'Active(anon): 8078916 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606256 kB' 'Mapped: 182616 kB' 'Shmem: 7532488 kB' 'KReclaimable: 246896 kB' 'Slab: 789200 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542304 kB' 'KernelStack: 21872 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 9502920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213460 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.860 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
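At this point custom_alloc has already turned its two size requests into per-node page counts and the HUGENODE string, and the meminfo dump above confirms the outcome (HugePages_Total: 1536, Hugetlb: 3145728 kB). The arithmetic is sketched standalone below; the division by the 2048 kB hugepage size is inferred from the 'Hugepagesize' line in the dump, and variable names follow the trace rather than the literal setup/hugepages.sh code:

#!/usr/bin/env bash
# Per-node hugepage request arithmetic as traced above (all sizes in kB).
default_hugepages=2048                              # 'Hugepagesize: 2048 kB' in the dump
declare -a nodes_hp
nodes_hp[0]=$(( 1048576 / default_hugepages ))      # first request  -> 512 pages
nodes_hp[1]=$(( 2097152 / default_hugepages ))      # second request -> 1024 pages
nr_hugepages=$(( nodes_hp[0] + nodes_hp[1] ))       # 1536, matching HugePages_Total above
HUGENODE="nodes_hp[0]=${nodes_hp[0]},nodes_hp[1]=${nodes_hp[1]}"
echo "$HUGENODE nr_hugepages=$nr_hugepages"         # nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536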
00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 
16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.861 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
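verify_nr_hugepages is now collecting the system-wide AnonHugePages and HugePages_Surp values before walking each node. The eventual per-node assertion compares sorted sets of requested versus observed counts, which is why the earlier odd_alloc epilogue accepts node0=512 even though it printed "expecting 513". A hedged sketch of that comparison, reusing the get_meminfo sketch shown earlier and simplified relative to the real setup/hugepages.sh logic (the script also folds surplus and reserved pages into the observed counts):

#!/usr/bin/env bash
nodes_test=(512 1024)          # counts requested via HUGENODE for this custom_alloc run
declare -a sorted_t=() sorted_s=()
for node in 0 1; do
    sorted_t[${nodes_test[node]}]=1                        # requested count -> array index
    sorted_s[$(get_meminfo HugePages_Total "$node")]=1     # observed count  -> array index
done
# Indexed-array keys come back in ascending order, so this compares sorted sets:
# an odd page landing on either node still satisfies the check.
[[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]] && echo 'per-node hugepage layout OK'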
00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42905100 kB' 'MemAvailable: 45212856 kB' 'Buffers: 11496 kB' 'Cached: 10202436 kB' 'SwapCached: 16 kB' 'Active: 8553640 kB' 'Inactive: 2263108 kB' 'Active(anon): 8078200 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606016 kB' 'Mapped: 182500 kB' 'Shmem: 7532492 kB' 'KReclaimable: 246896 kB' 'Slab: 789144 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542248 kB' 'KernelStack: 21888 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 9502936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213428 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.862 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 
16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.863 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42905108 kB' 'MemAvailable: 45212864 kB' 'Buffers: 11496 kB' 'Cached: 10202456 kB' 'SwapCached: 16 kB' 'Active: 8553656 kB' 'Inactive: 2263108 kB' 
'Active(anon): 8078216 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606020 kB' 'Mapped: 182500 kB' 'Shmem: 7532512 kB' 'KReclaimable: 246896 kB' 'Slab: 789144 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542248 kB' 'KernelStack: 21888 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 9502956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213428 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 
16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.864 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
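
The repeated continue / IFS=': ' / read -r var val _ entries above are the trace of setup/common.sh's get_meminfo helper scanning every field of the meminfo dump until it reaches the requested key (first HugePages_Surp, then HugePages_Rsvd). A minimal sketch of that loop, reconstructed from what the trace shows rather than copied from the SPDK source, looks roughly like this:

    # Sketch of get_meminfo as implied by the trace; names mirror the trace output.
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # When a node index is given, read that node's meminfo instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix found in per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching field logs a "continue" above
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Each field that fails the match produces the continue / IFS / read triple seen above, which is why scanning a meminfo dump of fifty-odd fields dominates this part of the log.
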
00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:08.865 nr_hugepages=1536 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.865 resv_hugepages=0 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.865 surplus_hugepages=0 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.865 anon_hugepages=0 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.865 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42906088 kB' 'MemAvailable: 45213844 kB' 'Buffers: 11496 kB' 'Cached: 10202460 kB' 'SwapCached: 16 kB' 'Active: 8553320 kB' 'Inactive: 2263108 kB' 'Active(anon): 8077880 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605680 kB' 'Mapped: 182500 kB' 'Shmem: 7532516 kB' 'KReclaimable: 246896 kB' 'Slab: 789144 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542248 kB' 'KernelStack: 21872 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 
'Committed_AS: 9502980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213428 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
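
Once these get_meminfo calls return, hugepages.sh checks the custom allocation against the values echoed in this trace (surp=0, resv=0, nr_hugepages=1536, and the HugePages_Total scan still in progress here). The arithmetic being verified amounts to the following sketch, with the numbers taken from this run:

    # Values echoed earlier in this trace (hugepages.sh@99-@105).
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd
    nr_hugepages=1536  # custom split requested by the test: 512 on node0 + 1024 on node1
    total=1536         # HugePages_Total, returned once the scan below reaches that field
    (( total == nr_hugepages + surp + resv ))   # 1536 == 1536 + 0 + 0, so the check passes

The get_nodes loop further down (nodes_sys[0]=512, nodes_sys[1]=1024, no_nodes=2) then re-runs get_meminfo against each node's own meminfo file, starting with node0, as part of checking the per-node counts.
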
00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.866 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26541196 kB' 'MemUsed: 6050888 kB' 'SwapCached: 16 kB' 'Active: 3389348 kB' 'Inactive: 180800 kB' 'Active(anon): 3172728 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3337064 kB' 'Mapped: 121584 kB' 'AnonPages: 236232 kB' 'Shmem: 2939644 kB' 'KernelStack: 12888 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136584 kB' 'Slab: 391016 kB' 'SReclaimable: 136584 kB' 'SUnreclaim: 254432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.867 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
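The repeated IFS=': ' / read -r var val _ / continue lines above are one pass of the get_meminfo helper scanning the node-0 meminfo snapshot for a single key (HugePages_Surp here, which comes back as 0 just below). A minimal stand-alone sketch of that lookup, written against the /sys/devices/system/node/nodeN/meminfo layout shown in the printf dump; the helper name get_meminfo_sketch and its exact structure are illustrative, not the literal setup/common.sh code:

    #!/usr/bin/env bash
    # Rough re-creation of the lookup traced above: pick the per-node meminfo file when a
    # node is given, strip the "Node N " prefix, and print the value of the requested key.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long run of "continue" lines in the trace
            echo "$val"                        # e.g. 0 for HugePages_Surp, 512 for HugePages_Total
            return 0
        done
        return 1
    }
    # get_meminfo_sketch HugePages_Surp 0   -> 0 for the node-0 snapshot printed above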
00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.868 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.868 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 16365124 kB' 'MemUsed: 11338024 kB' 'SwapCached: 0 kB' 'Active: 5164208 kB' 'Inactive: 2082308 kB' 'Active(anon): 4905388 kB' 'Inactive(anon): 57092 kB' 'Active(file): 258820 kB' 'Inactive(file): 2025216 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6876964 kB' 'Mapped: 60916 kB' 'AnonPages: 369640 kB' 'Shmem: 4592928 kB' 'KernelStack: 8984 kB' 'PageTables: 3424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110312 kB' 'Slab: 398128 kB' 'SReclaimable: 110312 kB' 'SUnreclaim: 287816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 
16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
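The node-1 lookup running here is the second pass of the same per-node accounting: for each NUMA node the script folds the reserved and surplus counts into the expected figure and then reports it against what the kernel actually shows (node0=512, node1=1024 in this run, which is what lets the final 512,1024 comparison succeed). A rough sketch of that bookkeeping, reusing the illustrative get_meminfo_sketch helper above; the array names follow the trace, the join and compare details are simplified:

    # Per-node verification along the lines of setup/hugepages.sh@110-130 in the trace.
    declare -a nodes_sys=(512 1024)    # hugepages actually configured per node (get_nodes)
    declare -a nodes_test=(512 1024)   # split requested by the custom_alloc test
    resv=0                             # reserved pages, 0 in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))   # 0 on both nodes here
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # the trace closes the test with a comma-joined comparison: [[ 512,1024 == 512,1024 ]]
    (IFS=,; [[ ${nodes_sys[*]} == "${nodes_test[*]}" ]]) && echo "custom_alloc OK"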
00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:08.870 node0=512 expecting 512 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:08.870 node1=1024 expecting 1024 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:08.870 00:03:08.870 real 0m3.422s 00:03:08.870 user 0m1.227s 00:03:08.870 sys 0m2.181s 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:08.870 16:19:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:08.870 ************************************ 00:03:08.870 END TEST custom_alloc 00:03:08.870 ************************************ 00:03:08.870 16:19:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:08.870 16:19:48 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:08.870 16:19:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.870 16:19:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.870 16:19:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:08.870 ************************************ 00:03:08.870 START TEST no_shrink_alloc 00:03:08.870 ************************************ 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.870 16:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:12.156 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.156 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.418 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:12.418 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
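The no_shrink_alloc prologue above sizes its request as 2097152 with node_ids=('0') and arrives at nr_hugepages=1024, then runs scripts/setup.sh (the vfio-pci lines) and enters verify_nr_hugepages, whose first step is the anonymous-THP check that the AnonHugePages lookup below belongs to. A back-of-the-envelope sketch of both steps, assuming the size argument is in kB and using the Hugepagesize: 2048 kB value from the snapshot that follows; get_meminfo_sketch is the illustrative helper from the first sketch:

    # Sizing consistent with get_test_nr_hugepages 2097152 0 in the trace:
    # 2 GiB worth of 2 MiB pages, all expected on node 0.
    size_kb=2097152
    hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)        # 2048 on this machine
    (( size_kb >= hugepagesize_kb )) && nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"                         # 2097152 / 2048 = 1024

    # verify_nr_hugepages only counts AnonHugePages when transparent hugepages are not
    # disabled; the trace tests 'always [madvise] never' against *\[\n\e\v\e\r\]*.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    [[ $thp != *"[never]"* ]] && anon=$(get_meminfo_sketch AnonHugePages)
    echo "anon=$anon kB"                                      # 0 kB in the snapshot below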
00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43902900 kB' 'MemAvailable: 46210656 kB' 'Buffers: 11496 kB' 'Cached: 10202592 kB' 'SwapCached: 16 kB' 'Active: 8555936 kB' 'Inactive: 2263108 kB' 'Active(anon): 8080496 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607740 kB' 'Mapped: 182636 kB' 'Shmem: 7532648 kB' 'KReclaimable: 246896 kB' 'Slab: 789636 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542740 kB' 'KernelStack: 22000 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9504716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213732 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.419 16:19:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[setup/common.sh@31-32, condensed: the IFS=': ' / read -r var val _ loop walks the remaining /proc/meminfo fields from Unevictable through HardwareCorrupted; none matches AnonHugePages, so each iteration continues]
00:03:12.420 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.420 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.420 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.420 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:12.420 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-31, condensed: local get=HugePages_Surp, node=, var val, mem_f mem; mem_f=/proc/meminfo; per-node meminfo check skipped (node is empty); mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _]
00:03:12.420 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43903236 kB' 'MemAvailable: 46210992 kB' 'Buffers: 11496 kB' 'Cached: 10202596 kB' 'SwapCached: 16 kB' 'Active: 8555564 kB' 'Inactive: 2263108 kB' 'Active(anon): 8080124 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607824 kB' 'Mapped: 182504 kB' 'Shmem: 7532652 kB' 'KReclaimable: 246896 kB' 'Slab: 789568 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542672 kB' 'KernelStack: 22032 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9506228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
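The entries above are the xtrace of an SPDK get_meminfo-style lookup: pick one field out of /proc/meminfo (or a per-node meminfo file) and print its value. Below is a minimal sketch of such a helper, reconstructed from the traced commands; the function layout, the return 1 fall-through and the per-node branch details are assumptions, not the verbatim test/setup/common.sh source.

shopt -s extglob  # needed for the +([0-9]) pattern used to strip per-node prefixes

# Sketch of a get_meminfo-style helper ($1 = field name, $2 = optional NUMA node).
get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # When a node is given and its meminfo exists, read the per-node file instead
    # (assumption based on the /sys/devices/system/node/node*/meminfo check above).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # e.g. var=HugePages_Surp, val=0
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as, e.g., get_meminfo HugePages_Surp, which is consistent with the surp=0 assignment that follows in the trace.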
[setup/common.sh@31-32, condensed: the read loop compares each field of the snapshot above, from MemTotal through HugePages_Rsvd, against HugePages_Surp; none matches, so each iteration continues]
00:03:12.421 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.421 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.421 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.421 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:12.421 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-31, condensed: local get=HugePages_Rsvd, node=, var val, mem_f mem; mem_f=/proc/meminfo; per-node meminfo check skipped (node is empty); mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': ']
00:03:12.421 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43911536 kB' 'MemAvailable: 46219292 kB' 'Buffers: 11496 kB' 'Cached: 10202612 kB' 'SwapCached: 16 kB' 'Active: 8555716 kB' 'Inactive: 2263108 kB' 'Active(anon): 8080276 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608496 kB' 'Mapped: 182504 kB' 'Shmem: 7532668 kB' 'KReclaimable: 246896 kB' 'Slab: 789440 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542544 kB' 'KernelStack: 22224 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9504756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213636 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
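For orientation, the setup/hugepages.sh lines traced here (@97, @99, @100) amount to collecting three counters before the shrink check. A hedged sketch of that calling code, assuming the get_meminfo helper sketched earlier; the comments show the values observed in this run:

anon=$(get_meminfo AnonHugePages)   # 0 kB of transparent hugepages in use
surp=$(get_meminfo HugePages_Surp)  # 0 surplus hugepages allocated
resv=$(get_meminfo HugePages_Rsvd)  # 0 hugepages reserved but not yet faulted in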
[setup/common.sh@31-32, condensed: the read loop compares each field of the snapshot above, from MemTotal through HugePages_Free, against HugePages_Rsvd; none matches, so each iteration continues]
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:12.422 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-31, condensed: local get=HugePages_Total, node=, var val, mem_f mem; mem_f=/proc/meminfo; per-node meminfo check skipped (node is empty); mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': ']
00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43912820 kB' 'MemAvailable: 46220576 kB' 'Buffers: 11496 kB' 'Cached: 10202652 kB' 'SwapCached: 16 kB' 'Active: 8554620 kB' 'Inactive: 2263108 kB' 'Active(anon): 8079180 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606904 kB' 'Mapped: 182504 kB' 'Shmem: 7532708 kB' 'KReclaimable: 246896 kB' 'Slab: 789284 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 542388 kB' 'KernelStack: 21872 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9503656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213508 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
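The arithmetic traced at setup/hugepages.sh@107-110 is the actual no_shrink_alloc assertion: after the allocation test, the 1024 configured 2048 kB hugepages must all still be present, and none of them may be accounted as surplus or reserved. A hedged sketch of what those lines amount to; the variable name total and the awk one-liner are illustrative, not taken from the log:

nr_hugepages=1024     # requested hugepage count for this run
anon=0 surp=0 resv=0  # values returned by the lookups above
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
(( 1024 == nr_hugepages + surp + resv ))  # nothing shifted into surplus/reserved
(( 1024 == nr_hugepages ))                # and the pool itself was not shrunk
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # kernel view, 1024 here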
00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32, condensed: the read loop compares each field of the snapshot above against HugePages_Total; MemTotal through KReclaimable have been checked so far, each followed by continue]
00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.423 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # 
return 0 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25485420 kB' 'MemUsed: 7106664 kB' 'SwapCached: 16 kB' 'Active: 3392376 kB' 'Inactive: 180800 kB' 'Active(anon): 3175756 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3337092 kB' 'Mapped: 121592 kB' 'AnonPages: 239376 kB' 'Shmem: 2939672 kB' 'KernelStack: 12936 kB' 'PageTables: 5052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136584 kB' 'Slab: 391052 kB' 'SReclaimable: 136584 kB' 'SUnreclaim: 254468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.424 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.425 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:12.426 node0=1024 expecting 1024 00:03:12.426 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:12.426 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:12.426 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:12.426 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:12.426 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.426 16:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:15.785 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.785 
0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.785 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:15.785 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.785 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43904604 kB' 'MemAvailable: 46212360 kB' 'Buffers: 11496 kB' 'Cached: 10202740 kB' 'SwapCached: 16 kB' 'Active: 8556552 kB' 'Inactive: 2263108 kB' 'Active(anon): 8081112 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608728 kB' 'Mapped: 182520 kB' 'Shmem: 7532796 kB' 'KReclaimable: 246896 kB' 'Slab: 790076 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 543180 kB' 'KernelStack: 21904 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9504092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213412 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.786 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:15.787 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43906000 kB' 'MemAvailable: 46213756 kB' 'Buffers: 11496 kB' 'Cached: 10202756 kB' 'SwapCached: 16 kB' 'Active: 8555056 kB' 'Inactive: 2263108 kB' 'Active(anon): 8079616 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607156 kB' 'Mapped: 182512 kB' 'Shmem: 7532812 kB' 'KReclaimable: 246896 kB' 'Slab: 790120 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 543224 kB' 'KernelStack: 21840 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9504108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213364 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.050 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.051 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 
16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.052 16:19:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43906360 kB' 'MemAvailable: 46214116 kB' 'Buffers: 11496 kB' 'Cached: 10202760 kB' 'SwapCached: 16 kB' 'Active: 8555432 kB' 'Inactive: 2263108 kB' 'Active(anon): 8079992 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607556 kB' 'Mapped: 182512 kB' 'Shmem: 7532816 kB' 'KReclaimable: 246896 kB' 'Slab: 790120 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 543224 kB' 'KernelStack: 21856 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9504140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213364 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.052 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.053 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.054 nr_hugepages=1024 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.054 resv_hugepages=0 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.054 surplus_hugepages=0 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.054 anon_hugepages=0 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.054 16:19:55 
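Inside the line above, hugepages.sh finishes its bookkeeping for this allocation: anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd) have been collected, the test echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and two arithmetic assertions pass before HugePages_Total is queried. A hedged restatement of those assertions is below; the left-hand 1024 in the trace expands from a value the test computed earlier, outside this excerpt, so "expected" is an assumed stand-in for it.

    # Sketch of the two (( ... )) checks traced above; surp, resv and nr_hugepages
    # come from the log, "expected" is an assumed placeholder for the literal 1024.
    expected=1024 nr_hugepages=1024 surp=0 resv=0
    (( expected == nr_hugepages + surp + resv ))   # pool adds up: 1024 == 1024 + 0 + 0
    (( expected == nr_hugepages ))                 # consistent with no_shrink_alloc: pool not shrunk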
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43905604 kB' 'MemAvailable: 46213360 kB' 'Buffers: 11496 kB' 'Cached: 10202760 kB' 'SwapCached: 16 kB' 'Active: 8555432 kB' 'Inactive: 2263108 kB' 'Active(anon): 8079992 kB' 'Inactive(anon): 57108 kB' 'Active(file): 475440 kB' 'Inactive(file): 2206000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607556 kB' 'Mapped: 182512 kB' 'Shmem: 7532816 kB' 'KReclaimable: 246896 kB' 'Slab: 790120 kB' 'SReclaimable: 246896 kB' 'SUnreclaim: 543224 kB' 'KernelStack: 21856 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9504292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213364 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.054 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.055 16:19:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.055 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.056 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25469716 kB' 'MemUsed: 7122368 kB' 'SwapCached: 16 kB' 'Active: 3391964 kB' 'Inactive: 180800 kB' 'Active(anon): 3175344 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3337136 kB' 'Mapped: 121600 kB' 'AnonPages: 238844 kB' 'Shmem: 2939716 kB' 'KernelStack: 12920 kB' 'PageTables: 4996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136584 kB' 'Slab: 391792 kB' 'SReclaimable: 136584 kB' 'SUnreclaim: 255208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 
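The field-by-field scan traced above is the harness's get_meminfo helper walking a meminfo file with IFS=': ' until it hits the requested key. A minimal sketch of the same pattern (helper name and argument order assumed, not the harness's exact interface):

    shopt -s extglob

    # Sketch of the lookup the trace is stepping through: read the system or
    # per-node meminfo file and print the value for one requested field.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Prefer the per-node view when a node is requested and sysfs exposes one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

With this, get_meminfo_sketch HugePages_Total and get_meminfo_sketch HugePages_Surp 0 reproduce the two lookups the trace performs.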
16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.057 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:16.058 node0=1024 expecting 1024 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:16.058 00:03:16.058 real 0m7.053s 00:03:16.058 user 0m2.660s 00:03:16.058 sys 0m4.491s 00:03:16.058 
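After the global count check, the test walks the per-node sysfs entries and asserts the node totals add up to what was requested (here all 1024 pages land on node 0, hence "node0=1024 expecting 1024"). A compact way to reproduce that cross-check outside the harness, assuming 2 MiB pages:

    # Cross-check sketch: sum per-node 2048 kB hugepage counts and compare
    # with the global figure from /proc/meminfo (2 MiB page size assumed).
    total=0
    for node in /sys/devices/system/node/node[0-9]*; do
        count=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${node##*/}=$count"
        (( total += count ))
    done
    global=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == global )); then
        echo "per-node counts add up: $total"
    else
        echo "mismatch: nodes=$total vs global=$global"
    fi

On the traced box this would print node0=1024, node1=0 and a matching global total of 1024.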
16:19:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.058 16:19:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:16.058 ************************************ 00:03:16.058 END TEST no_shrink_alloc 00:03:16.058 ************************************ 00:03:16.058 16:19:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:16.058 16:19:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:16.058 00:03:16.058 real 0m26.068s 00:03:16.058 user 0m9.041s 00:03:16.058 sys 0m15.729s 00:03:16.058 16:19:55 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.058 16:19:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.058 ************************************ 00:03:16.058 END TEST hugepages 00:03:16.058 ************************************ 00:03:16.058 16:19:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:16.058 16:19:55 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:16.058 16:19:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.058 16:19:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.058 16:19:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:16.058 ************************************ 00:03:16.058 START TEST driver 00:03:16.058 ************************************ 00:03:16.058 16:19:55 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:16.317 * Looking for test storage... 
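Before the driver suite starts, the hugepages suite tears itself down with clear_hp; the repeated "echo 0" entries in the trace above amount to releasing every reserved hugepage of every size on every node and marking the environment cleared. Roughly (needs root):

    # What the traced clear_hp step does, stripped of the harness plumbing.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes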
00:03:16.317 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:16.317 16:19:55 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:16.317 16:19:55 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.317 16:19:55 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.587 16:20:00 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:21.587 16:20:00 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.587 16:20:00 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.587 16:20:00 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:21.587 ************************************ 00:03:21.587 START TEST guess_driver 00:03:21.587 ************************************ 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:21.587 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:21.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:21.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:21.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:21.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:21.588 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:21.588 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:21.588 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:21.588 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:21.588 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:21.588 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:21.588 16:20:00 
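The guess_driver trace settles on vfio-pci because unsafe no-IOMMU mode is off, 176 IOMMU groups exist, and modprobe --show-depends resolves vfio_pci to real .ko files. A decision sketch of that pick (the uio_pci_generic fallback is assumed here, and checking only modprobe's exit status simplifies the trace's test that the output actually names .ko files):

    # Driver-pick sketch: prefer vfio-pci when the IOMMU is usable (or unsafe
    # no-IOMMU mode is enabled) and the module chain resolves; else fall back.
    pick_driver_sketch() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local groups=(/sys/kernel/iommu_groups/*)
        local have_iommu=0
        [[ -e ${groups[0]} ]] && have_iommu=${#groups[@]}
        if { (( have_iommu > 0 )) || [[ $unsafe_vfio == [Yy]* ]]; } &&
            modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic &> /dev/null; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
            return 1
        fi
    }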
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:21.588 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:21.588 Looking for driver=vfio-pci 00:03:21.588 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.588 16:20:00 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:21.588 16:20:00 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.588 16:20:00 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:24.872 16:20:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 16:20:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 16:20:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 16:20:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 16:20:05 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:26.250 16:20:05 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:26.250 16:20:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.250 16:20:05 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.436 00:03:30.436 real 0m9.279s 00:03:30.436 user 0m2.326s 00:03:30.436 sys 0m4.580s 00:03:30.436 16:20:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.436 16:20:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:30.436 ************************************ 00:03:30.436 END TEST guess_driver 00:03:30.436 ************************************ 00:03:30.436 16:20:09 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:30.436 00:03:30.436 real 0m14.170s 00:03:30.436 user 0m3.733s 
00:03:30.436 sys 0m7.280s 00:03:30.436 16:20:09 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.436 16:20:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:30.436 ************************************ 00:03:30.436 END TEST driver 00:03:30.436 ************************************ 00:03:30.436 16:20:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:30.436 16:20:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:30.436 16:20:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.436 16:20:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.436 16:20:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.436 ************************************ 00:03:30.436 START TEST devices 00:03:30.436 ************************************ 00:03:30.436 16:20:09 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:30.436 * Looking for test storage... 00:03:30.436 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:30.436 16:20:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:30.436 16:20:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:30.436 16:20:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.436 16:20:09 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:34.617 16:20:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:34.617 16:20:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:34.617 16:20:13 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:34.617 16:20:13 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.617 16:20:13 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:34.617 16:20:13 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:34.617 16:20:13 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.617 16:20:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:34.617 16:20:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:34.617 16:20:13 setup.sh.devices -- 
scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:34.618 16:20:13 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:34.618 No valid GPT data, bailing 00:03:34.618 16:20:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:34.618 16:20:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:34.618 16:20:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:34.618 16:20:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:34.618 16:20:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:34.618 16:20:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:34.618 16:20:13 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:34.618 16:20:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:34.618 16:20:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:34.618 16:20:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:34.618 16:20:13 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:34.618 16:20:13 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:34.618 16:20:13 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:34.618 16:20:13 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.618 16:20:13 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.618 16:20:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:34.618 ************************************ 00:03:34.618 START TEST nvme_mount 00:03:34.618 ************************************ 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 
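The device only qualifies for the mount tests because spdk-gpt.py and blkid both find no partition signature on it ("No valid GPT data, bailing") and its capacity clears min_disk_size of 3221225472 bytes (3 GiB). A standalone version of that gate, using blkid alone as a simplification of the traced two-step check:

    # Gate sketch: a namespace qualifies only when no partition table is
    # found on it and it is at least 3 GiB large.
    disk_usable_sketch() {
        local dev=$1
        local min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes
        # Any PTTYPE value (gpt, dos, ...) means the disk is already in use.
        if blkid -s PTTYPE -o value "/dev/$dev" | grep -q .; then
            echo "/dev/$dev already carries a partition table" >&2
            return 1
        fi
        # /sys/block/<dev>/size is reported in 512-byte sectors.
        local size_bytes=$(( $(< "/sys/block/$dev/size") * 512 ))
        (( size_bytes >= min_disk_size ))
    }

Here disk_usable_sketch nvme0n1 would pass: the traced namespace reports 1600321314816 bytes, comfortably above the floor.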
00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:34.618 16:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:35.184 Creating new GPT entries in memory. 00:03:35.184 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:35.184 other utilities. 00:03:35.184 16:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:35.184 16:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.184 16:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:35.184 16:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.184 16:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:36.118 Creating new GPT entries in memory. 00:03:36.118 The operation has completed successfully. 00:03:36.118 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:36.118 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.118 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1984522 00:03:36.118 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.118 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:36.118 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.118 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:36.118 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:36.118 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.377 16:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:39.661 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.661 16:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:39.661 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:39.661 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:39.661 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:39.661 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:39.661 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:39.661 16:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:39.661 16:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.661 16:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:39.661 16:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.919 16:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 
16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.207 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 
-- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.208 16:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.489 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.489 00:03:46.489 real 0m12.106s 00:03:46.489 user 0m3.586s 00:03:46.489 sys 0m6.421s 00:03:46.489 
16:20:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.489 16:20:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:46.489 ************************************ 00:03:46.489 END TEST nvme_mount 00:03:46.489 ************************************ 00:03:46.489 16:20:25 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:46.489 16:20:25 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:46.489 16:20:25 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.489 16:20:25 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.489 16:20:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.489 ************************************ 00:03:46.489 START TEST dm_mount 00:03:46.489 ************************************ 00:03:46.489 16:20:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:46.489 16:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:46.489 16:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:46.489 16:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:46.489 16:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:46.489 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:46.489 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:46.489 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:46.490 16:20:25 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:47.423 Creating new GPT entries in memory. 00:03:47.423 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:47.423 other utilities. 00:03:47.423 16:20:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:47.423 16:20:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.423 16:20:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:47.423 16:20:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.423 16:20:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:48.358 Creating new GPT entries in memory. 00:03:48.358 The operation has completed successfully. 00:03:48.358 16:20:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.358 16:20:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.358 16:20:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:48.358 16:20:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:48.358 16:20:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:49.292 The operation has completed successfully. 00:03:49.292 16:20:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:49.292 16:20:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.292 16:20:28 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1988815 00:03:49.292 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:49.292 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:49.292 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.292 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.549 16:20:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:52.889 16:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:52.889 
16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.889 16:20:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:55.423 16:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:55.682 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:55.682 00:03:55.682 real 0m9.394s 00:03:55.682 user 0m2.204s 00:03:55.682 sys 0m4.214s 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.682 16:20:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:55.682 ************************************ 00:03:55.682 END TEST dm_mount 00:03:55.682 ************************************ 00:03:55.682 16:20:35 setup.sh.devices -- common/autotest_common.sh@1142 
-- # return 0 00:03:55.682 16:20:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:55.682 16:20:35 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:55.682 16:20:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.682 16:20:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.682 16:20:35 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:55.682 16:20:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.682 16:20:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.942 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:55.942 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:55.942 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:55.942 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:55.942 16:20:35 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:55.942 16:20:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:55.942 16:20:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:55.942 16:20:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.942 16:20:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:55.942 16:20:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.942 16:20:35 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:55.942 00:03:55.942 real 0m25.624s 00:03:55.942 user 0m7.095s 00:03:55.942 sys 0m13.261s 00:03:55.942 16:20:35 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.942 16:20:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.942 ************************************ 00:03:55.942 END TEST devices 00:03:55.942 ************************************ 00:03:55.942 16:20:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:55.942 00:03:55.942 real 1m30.877s 00:03:55.942 user 0m27.814s 00:03:55.942 sys 0m51.580s 00:03:55.942 16:20:35 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.942 16:20:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.942 ************************************ 00:03:55.942 END TEST setup.sh 00:03:55.942 ************************************ 00:03:56.200 16:20:35 -- common/autotest_common.sh@1142 -- # return 0 00:03:56.200 16:20:35 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:59.486 Hugepages 00:03:59.486 node hugesize free / total 00:03:59.486 node0 1048576kB 0 / 0 00:03:59.486 node0 2048kB 2048 / 2048 00:03:59.486 node1 1048576kB 0 / 0 00:03:59.486 node1 2048kB 0 / 0 00:03:59.486 00:03:59.486 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.486 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:59.486 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:59.486 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:59.486 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:59.486 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:59.486 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:59.486 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:59.486 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 
00:03:59.486 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:59.486 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:59.486 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:59.486 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:59.486 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:59.486 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:59.486 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:59.486 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:59.486 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:59.486 16:20:39 -- spdk/autotest.sh@130 -- # uname -s 00:03:59.486 16:20:39 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:59.486 16:20:39 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:59.486 16:20:39 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:02.767 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:02.767 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:02.767 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.025 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:04.928 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.928 16:20:44 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:05.864 16:20:45 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:05.864 16:20:45 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:05.864 16:20:45 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:05.864 16:20:45 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:05.864 16:20:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:05.864 16:20:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:05.864 16:20:45 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.864 16:20:45 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:05.864 16:20:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:05.864 16:20:45 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:05.864 16:20:45 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:05.864 16:20:45 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.143 Waiting for block devices as requested 00:04:09.143 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:09.143 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:09.143 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:09.143 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:09.143 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:09.143 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:09.143 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:09.143 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:09.143 
0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:09.401 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:09.401 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:09.401 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:09.658 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:09.658 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:09.658 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:09.916 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:09.916 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:10.175 16:20:49 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:10.175 16:20:49 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:10.175 16:20:49 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:10.175 16:20:49 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:04:10.175 16:20:49 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:10.175 16:20:49 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:10.175 16:20:49 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:10.175 16:20:49 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:10.175 16:20:49 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:10.175 16:20:49 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:10.175 16:20:49 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:10.175 16:20:49 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:10.175 16:20:49 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:10.175 16:20:49 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:10.175 16:20:49 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:10.175 16:20:49 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:10.175 16:20:49 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:10.175 16:20:49 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:10.175 16:20:49 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:10.175 16:20:49 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:10.175 16:20:49 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:10.175 16:20:49 -- common/autotest_common.sh@1557 -- # continue 00:04:10.175 16:20:49 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:10.175 16:20:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:10.175 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:04:10.175 16:20:49 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:10.175 16:20:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:10.175 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:04:10.175 16:20:49 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:13.456 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:80:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:04:13.456 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.456 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.833 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.833 16:20:54 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:14.833 16:20:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.833 16:20:54 -- common/autotest_common.sh@10 -- # set +x 00:04:14.833 16:20:54 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:14.833 16:20:54 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:14.833 16:20:54 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.833 16:20:54 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:14.833 16:20:54 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:14.833 16:20:54 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:14.833 16:20:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:14.833 16:20:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:14.833 16:20:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.833 16:20:54 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.833 16:20:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:15.091 16:20:54 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:15.091 16:20:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:15.091 16:20:54 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:15.091 16:20:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:15.091 16:20:54 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:15.091 16:20:54 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:15.091 16:20:54 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:15.091 16:20:54 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:04:15.091 16:20:54 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:04:15.091 16:20:54 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1998317 00:04:15.091 16:20:54 -- common/autotest_common.sh@1598 -- # waitforlisten 1998317 00:04:15.091 16:20:54 -- common/autotest_common.sh@829 -- # '[' -z 1998317 ']' 00:04:15.091 16:20:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.091 16:20:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.091 16:20:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.091 16:20:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.091 16:20:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.091 16:20:54 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.091 [2024-07-15 16:20:54.552707] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:15.091 [2024-07-15 16:20:54.552785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998317 ] 00:04:15.091 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.091 [2024-07-15 16:20:54.622488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.348 [2024-07-15 16:20:54.700951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.912 16:20:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.912 16:20:55 -- common/autotest_common.sh@862 -- # return 0 00:04:15.912 16:20:55 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:15.912 16:20:55 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:15.912 16:20:55 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:19.190 nvme0n1 00:04:19.190 16:20:58 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:19.190 [2024-07-15 16:20:58.501426] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:19.190 request: 00:04:19.190 { 00:04:19.190 "nvme_ctrlr_name": "nvme0", 00:04:19.190 "password": "test", 00:04:19.190 "method": "bdev_nvme_opal_revert", 00:04:19.190 "req_id": 1 00:04:19.190 } 00:04:19.190 Got JSON-RPC error response 00:04:19.190 response: 00:04:19.190 { 00:04:19.190 "code": -32602, 00:04:19.190 "message": "Invalid parameters" 00:04:19.190 } 00:04:19.190 16:20:58 -- common/autotest_common.sh@1604 -- # true 00:04:19.190 16:20:58 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:19.190 16:20:58 -- common/autotest_common.sh@1608 -- # killprocess 1998317 00:04:19.190 16:20:58 -- common/autotest_common.sh@948 -- # '[' -z 1998317 ']' 00:04:19.190 16:20:58 -- common/autotest_common.sh@952 -- # kill -0 1998317 00:04:19.190 16:20:58 -- common/autotest_common.sh@953 -- # uname 00:04:19.190 16:20:58 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:19.190 16:20:58 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1998317 00:04:19.190 16:20:58 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:19.190 16:20:58 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:19.190 16:20:58 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1998317' 00:04:19.190 killing process with pid 1998317 00:04:19.190 16:20:58 -- common/autotest_common.sh@967 -- # kill 1998317 00:04:19.190 16:20:58 -- common/autotest_common.sh@972 -- # wait 1998317 00:04:21.718 16:21:00 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:21.718 16:21:00 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:21.718 16:21:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:21.718 16:21:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:21.718 16:21:00 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:21.718 16:21:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.718 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:04:21.718 16:21:00 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:21.718 16:21:00 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:21.718 16:21:00 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:04:21.718 16:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.718 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:04:21.718 ************************************ 00:04:21.718 START TEST env 00:04:21.718 ************************************ 00:04:21.718 16:21:00 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:21.718 * Looking for test storage... 00:04:21.718 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:21.718 16:21:00 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:21.718 16:21:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.718 16:21:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.718 16:21:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.718 ************************************ 00:04:21.718 START TEST env_memory 00:04:21.718 ************************************ 00:04:21.718 16:21:00 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:21.718 00:04:21.718 00:04:21.718 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.718 http://cunit.sourceforge.net/ 00:04:21.718 00:04:21.718 00:04:21.718 Suite: memory 00:04:21.718 Test: alloc and free memory map ...[2024-07-15 16:21:00.900160] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:21.718 passed 00:04:21.718 Test: mem map translation ...[2024-07-15 16:21:00.914131] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:21.718 [2024-07-15 16:21:00.914147] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:21.718 [2024-07-15 16:21:00.914179] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:21.718 [2024-07-15 16:21:00.914188] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:21.718 passed 00:04:21.718 Test: mem map registration ...[2024-07-15 16:21:00.935906] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:21.718 [2024-07-15 16:21:00.935921] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:21.718 passed 00:04:21.718 Test: mem map adjacent registrations ...passed 00:04:21.718 00:04:21.718 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.718 suites 1 1 n/a 0 0 00:04:21.718 tests 4 4 4 0 0 00:04:21.718 asserts 152 152 152 0 n/a 00:04:21.718 00:04:21.718 Elapsed time = 0.088 seconds 00:04:21.718 00:04:21.718 real 0m0.101s 00:04:21.718 user 0m0.089s 00:04:21.718 sys 0m0.012s 00:04:21.718 16:21:00 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.718 16:21:00 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:04:21.718 ************************************ 00:04:21.718 END TEST env_memory 00:04:21.718 ************************************ 00:04:21.718 16:21:01 env -- common/autotest_common.sh@1142 -- # return 0 00:04:21.718 16:21:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:21.718 16:21:01 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.718 16:21:01 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.718 16:21:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.718 ************************************ 00:04:21.718 START TEST env_vtophys 00:04:21.718 ************************************ 00:04:21.718 16:21:01 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:21.718 EAL: lib.eal log level changed from notice to debug 00:04:21.718 EAL: Detected lcore 0 as core 0 on socket 0 00:04:21.718 EAL: Detected lcore 1 as core 1 on socket 0 00:04:21.718 EAL: Detected lcore 2 as core 2 on socket 0 00:04:21.718 EAL: Detected lcore 3 as core 3 on socket 0 00:04:21.718 EAL: Detected lcore 4 as core 4 on socket 0 00:04:21.718 EAL: Detected lcore 5 as core 5 on socket 0 00:04:21.718 EAL: Detected lcore 6 as core 6 on socket 0 00:04:21.718 EAL: Detected lcore 7 as core 8 on socket 0 00:04:21.718 EAL: Detected lcore 8 as core 9 on socket 0 00:04:21.718 EAL: Detected lcore 9 as core 10 on socket 0 00:04:21.718 EAL: Detected lcore 10 as core 11 on socket 0 00:04:21.718 EAL: Detected lcore 11 as core 12 on socket 0 00:04:21.718 EAL: Detected lcore 12 as core 13 on socket 0 00:04:21.718 EAL: Detected lcore 13 as core 14 on socket 0 00:04:21.718 EAL: Detected lcore 14 as core 16 on socket 0 00:04:21.718 EAL: Detected lcore 15 as core 17 on socket 0 00:04:21.718 EAL: Detected lcore 16 as core 18 on socket 0 00:04:21.718 EAL: Detected lcore 17 as core 19 on socket 0 00:04:21.718 EAL: Detected lcore 18 as core 20 on socket 0 00:04:21.718 EAL: Detected lcore 19 as core 21 on socket 0 00:04:21.718 EAL: Detected lcore 20 as core 22 on socket 0 00:04:21.718 EAL: Detected lcore 21 as core 24 on socket 0 00:04:21.718 EAL: Detected lcore 22 as core 25 on socket 0 00:04:21.718 EAL: Detected lcore 23 as core 26 on socket 0 00:04:21.718 EAL: Detected lcore 24 as core 27 on socket 0 00:04:21.718 EAL: Detected lcore 25 as core 28 on socket 0 00:04:21.718 EAL: Detected lcore 26 as core 29 on socket 0 00:04:21.718 EAL: Detected lcore 27 as core 30 on socket 0 00:04:21.718 EAL: Detected lcore 28 as core 0 on socket 1 00:04:21.718 EAL: Detected lcore 29 as core 1 on socket 1 00:04:21.718 EAL: Detected lcore 30 as core 2 on socket 1 00:04:21.718 EAL: Detected lcore 31 as core 3 on socket 1 00:04:21.718 EAL: Detected lcore 32 as core 4 on socket 1 00:04:21.718 EAL: Detected lcore 33 as core 5 on socket 1 00:04:21.718 EAL: Detected lcore 34 as core 6 on socket 1 00:04:21.718 EAL: Detected lcore 35 as core 8 on socket 1 00:04:21.718 EAL: Detected lcore 36 as core 9 on socket 1 00:04:21.718 EAL: Detected lcore 37 as core 10 on socket 1 00:04:21.718 EAL: Detected lcore 38 as core 11 on socket 1 00:04:21.718 EAL: Detected lcore 39 as core 12 on socket 1 00:04:21.718 EAL: Detected lcore 40 as core 13 on socket 1 00:04:21.718 EAL: Detected lcore 41 as core 14 on socket 1 00:04:21.718 EAL: Detected lcore 42 as core 16 on socket 1 00:04:21.718 EAL: Detected lcore 43 as core 17 on socket 1 
00:04:21.718 EAL: Detected lcore 44 as core 18 on socket 1 00:04:21.718 EAL: Detected lcore 45 as core 19 on socket 1 00:04:21.718 EAL: Detected lcore 46 as core 20 on socket 1 00:04:21.718 EAL: Detected lcore 47 as core 21 on socket 1 00:04:21.718 EAL: Detected lcore 48 as core 22 on socket 1 00:04:21.718 EAL: Detected lcore 49 as core 24 on socket 1 00:04:21.718 EAL: Detected lcore 50 as core 25 on socket 1 00:04:21.718 EAL: Detected lcore 51 as core 26 on socket 1 00:04:21.718 EAL: Detected lcore 52 as core 27 on socket 1 00:04:21.718 EAL: Detected lcore 53 as core 28 on socket 1 00:04:21.718 EAL: Detected lcore 54 as core 29 on socket 1 00:04:21.718 EAL: Detected lcore 55 as core 30 on socket 1 00:04:21.718 EAL: Detected lcore 56 as core 0 on socket 0 00:04:21.718 EAL: Detected lcore 57 as core 1 on socket 0 00:04:21.718 EAL: Detected lcore 58 as core 2 on socket 0 00:04:21.718 EAL: Detected lcore 59 as core 3 on socket 0 00:04:21.718 EAL: Detected lcore 60 as core 4 on socket 0 00:04:21.718 EAL: Detected lcore 61 as core 5 on socket 0 00:04:21.718 EAL: Detected lcore 62 as core 6 on socket 0 00:04:21.718 EAL: Detected lcore 63 as core 8 on socket 0 00:04:21.718 EAL: Detected lcore 64 as core 9 on socket 0 00:04:21.718 EAL: Detected lcore 65 as core 10 on socket 0 00:04:21.718 EAL: Detected lcore 66 as core 11 on socket 0 00:04:21.718 EAL: Detected lcore 67 as core 12 on socket 0 00:04:21.718 EAL: Detected lcore 68 as core 13 on socket 0 00:04:21.719 EAL: Detected lcore 69 as core 14 on socket 0 00:04:21.719 EAL: Detected lcore 70 as core 16 on socket 0 00:04:21.719 EAL: Detected lcore 71 as core 17 on socket 0 00:04:21.719 EAL: Detected lcore 72 as core 18 on socket 0 00:04:21.719 EAL: Detected lcore 73 as core 19 on socket 0 00:04:21.719 EAL: Detected lcore 74 as core 20 on socket 0 00:04:21.719 EAL: Detected lcore 75 as core 21 on socket 0 00:04:21.719 EAL: Detected lcore 76 as core 22 on socket 0 00:04:21.719 EAL: Detected lcore 77 as core 24 on socket 0 00:04:21.719 EAL: Detected lcore 78 as core 25 on socket 0 00:04:21.719 EAL: Detected lcore 79 as core 26 on socket 0 00:04:21.719 EAL: Detected lcore 80 as core 27 on socket 0 00:04:21.719 EAL: Detected lcore 81 as core 28 on socket 0 00:04:21.719 EAL: Detected lcore 82 as core 29 on socket 0 00:04:21.719 EAL: Detected lcore 83 as core 30 on socket 0 00:04:21.719 EAL: Detected lcore 84 as core 0 on socket 1 00:04:21.719 EAL: Detected lcore 85 as core 1 on socket 1 00:04:21.719 EAL: Detected lcore 86 as core 2 on socket 1 00:04:21.719 EAL: Detected lcore 87 as core 3 on socket 1 00:04:21.719 EAL: Detected lcore 88 as core 4 on socket 1 00:04:21.719 EAL: Detected lcore 89 as core 5 on socket 1 00:04:21.719 EAL: Detected lcore 90 as core 6 on socket 1 00:04:21.719 EAL: Detected lcore 91 as core 8 on socket 1 00:04:21.719 EAL: Detected lcore 92 as core 9 on socket 1 00:04:21.719 EAL: Detected lcore 93 as core 10 on socket 1 00:04:21.719 EAL: Detected lcore 94 as core 11 on socket 1 00:04:21.719 EAL: Detected lcore 95 as core 12 on socket 1 00:04:21.719 EAL: Detected lcore 96 as core 13 on socket 1 00:04:21.719 EAL: Detected lcore 97 as core 14 on socket 1 00:04:21.719 EAL: Detected lcore 98 as core 16 on socket 1 00:04:21.719 EAL: Detected lcore 99 as core 17 on socket 1 00:04:21.719 EAL: Detected lcore 100 as core 18 on socket 1 00:04:21.719 EAL: Detected lcore 101 as core 19 on socket 1 00:04:21.719 EAL: Detected lcore 102 as core 20 on socket 1 00:04:21.719 EAL: Detected lcore 103 as core 21 on socket 1 00:04:21.719 EAL: Detected 
lcore 104 as core 22 on socket 1 00:04:21.719 EAL: Detected lcore 105 as core 24 on socket 1 00:04:21.719 EAL: Detected lcore 106 as core 25 on socket 1 00:04:21.719 EAL: Detected lcore 107 as core 26 on socket 1 00:04:21.719 EAL: Detected lcore 108 as core 27 on socket 1 00:04:21.719 EAL: Detected lcore 109 as core 28 on socket 1 00:04:21.719 EAL: Detected lcore 110 as core 29 on socket 1 00:04:21.719 EAL: Detected lcore 111 as core 30 on socket 1 00:04:21.719 EAL: Maximum logical cores by configuration: 128 00:04:21.719 EAL: Detected CPU lcores: 112 00:04:21.719 EAL: Detected NUMA nodes: 2 00:04:21.719 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:21.719 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:21.719 EAL: Checking presence of .so 'librte_eal.so' 00:04:21.719 EAL: Detected static linkage of DPDK 00:04:21.719 EAL: No shared files mode enabled, IPC will be disabled 00:04:21.719 EAL: Bus pci wants IOVA as 'DC' 00:04:21.719 EAL: Buses did not request a specific IOVA mode. 00:04:21.719 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:21.719 EAL: Selected IOVA mode 'VA' 00:04:21.719 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.719 EAL: Probing VFIO support... 00:04:21.719 EAL: IOMMU type 1 (Type 1) is supported 00:04:21.719 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:21.719 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:21.719 EAL: VFIO support initialized 00:04:21.719 EAL: Ask a virtual area of 0x2e000 bytes 00:04:21.719 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:21.719 EAL: Setting up physically contiguous memory... 00:04:21.719 EAL: Setting maximum number of open files to 524288 00:04:21.719 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:21.719 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:21.719 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:21.719 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.719 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:21.719 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.719 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.719 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:21.719 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:21.719 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.719 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:21.719 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.719 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.719 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:21.719 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:21.719 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.719 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:21.719 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.719 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.719 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:21.719 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:21.719 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.719 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:21.719 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.719 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.719 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:21.719 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:04:21.719 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:21.719 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.719 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:21.719 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.719 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.719 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:21.719 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:21.719 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.719 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:21.719 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.719 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.719 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:21.719 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:21.719 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.719 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:21.719 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.719 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.719 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:21.719 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:21.719 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.719 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:21.719 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.719 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.719 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:21.719 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:21.719 EAL: Hugepages will be freed exactly as allocated. 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: TSC frequency is ~2500000 KHz 00:04:21.719 EAL: Main lcore 0 is ready (tid=7fafac845a00;cpuset=[0]) 00:04:21.719 EAL: Trying to obtain current memory policy. 00:04:21.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.719 EAL: Restoring previous memory policy: 0 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was expanded by 2MB 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Mem event callback 'spdk:(nil)' registered 00:04:21.719 00:04:21.719 00:04:21.719 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.719 http://cunit.sourceforge.net/ 00:04:21.719 00:04:21.719 00:04:21.719 Suite: components_suite 00:04:21.719 Test: vtophys_malloc_test ...passed 00:04:21.719 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:21.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.719 EAL: Restoring previous memory policy: 4 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was expanded by 4MB 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was shrunk by 4MB 00:04:21.719 EAL: Trying to obtain current memory policy. 
00:04:21.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.719 EAL: Restoring previous memory policy: 4 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was expanded by 6MB 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was shrunk by 6MB 00:04:21.719 EAL: Trying to obtain current memory policy. 00:04:21.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.719 EAL: Restoring previous memory policy: 4 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was expanded by 10MB 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was shrunk by 10MB 00:04:21.719 EAL: Trying to obtain current memory policy. 00:04:21.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.719 EAL: Restoring previous memory policy: 4 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was expanded by 18MB 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was shrunk by 18MB 00:04:21.719 EAL: Trying to obtain current memory policy. 00:04:21.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.719 EAL: Restoring previous memory policy: 4 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was expanded by 34MB 00:04:21.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.719 EAL: request: mp_malloc_sync 00:04:21.719 EAL: No shared files mode enabled, IPC is disabled 00:04:21.719 EAL: Heap on socket 0 was shrunk by 34MB 00:04:21.719 EAL: Trying to obtain current memory policy. 00:04:21.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.720 EAL: Restoring previous memory policy: 4 00:04:21.720 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.720 EAL: request: mp_malloc_sync 00:04:21.720 EAL: No shared files mode enabled, IPC is disabled 00:04:21.720 EAL: Heap on socket 0 was expanded by 66MB 00:04:21.720 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.720 EAL: request: mp_malloc_sync 00:04:21.720 EAL: No shared files mode enabled, IPC is disabled 00:04:21.720 EAL: Heap on socket 0 was shrunk by 66MB 00:04:21.720 EAL: Trying to obtain current memory policy. 
00:04:21.720 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.720 EAL: Restoring previous memory policy: 4 00:04:21.720 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.720 EAL: request: mp_malloc_sync 00:04:21.720 EAL: No shared files mode enabled, IPC is disabled 00:04:21.720 EAL: Heap on socket 0 was expanded by 130MB 00:04:21.720 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.720 EAL: request: mp_malloc_sync 00:04:21.720 EAL: No shared files mode enabled, IPC is disabled 00:04:21.720 EAL: Heap on socket 0 was shrunk by 130MB 00:04:21.720 EAL: Trying to obtain current memory policy. 00:04:21.720 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.720 EAL: Restoring previous memory policy: 4 00:04:21.720 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.720 EAL: request: mp_malloc_sync 00:04:21.720 EAL: No shared files mode enabled, IPC is disabled 00:04:21.720 EAL: Heap on socket 0 was expanded by 258MB 00:04:21.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.977 EAL: request: mp_malloc_sync 00:04:21.977 EAL: No shared files mode enabled, IPC is disabled 00:04:21.977 EAL: Heap on socket 0 was shrunk by 258MB 00:04:21.977 EAL: Trying to obtain current memory policy. 00:04:21.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.977 EAL: Restoring previous memory policy: 4 00:04:21.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.977 EAL: request: mp_malloc_sync 00:04:21.977 EAL: No shared files mode enabled, IPC is disabled 00:04:21.977 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.234 EAL: request: mp_malloc_sync 00:04:22.234 EAL: No shared files mode enabled, IPC is disabled 00:04:22.234 EAL: Heap on socket 0 was shrunk by 514MB 00:04:22.234 EAL: Trying to obtain current memory policy. 
00:04:22.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.234 EAL: Restoring previous memory policy: 4 00:04:22.234 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.234 EAL: request: mp_malloc_sync 00:04:22.234 EAL: No shared files mode enabled, IPC is disabled 00:04:22.234 EAL: Heap on socket 0 was expanded by 1026MB 00:04:22.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.750 EAL: request: mp_malloc_sync 00:04:22.750 EAL: No shared files mode enabled, IPC is disabled 00:04:22.750 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:22.750 passed 00:04:22.750 00:04:22.750 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.750 suites 1 1 n/a 0 0 00:04:22.750 tests 2 2 2 0 0 00:04:22.750 asserts 497 497 497 0 n/a 00:04:22.750 00:04:22.750 Elapsed time = 0.959 seconds 00:04:22.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.750 EAL: request: mp_malloc_sync 00:04:22.750 EAL: No shared files mode enabled, IPC is disabled 00:04:22.750 EAL: Heap on socket 0 was shrunk by 2MB 00:04:22.750 EAL: No shared files mode enabled, IPC is disabled 00:04:22.750 EAL: No shared files mode enabled, IPC is disabled 00:04:22.750 EAL: No shared files mode enabled, IPC is disabled 00:04:22.750 00:04:22.750 real 0m1.081s 00:04:22.750 user 0m0.621s 00:04:22.750 sys 0m0.432s 00:04:22.750 16:21:02 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.750 16:21:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:22.750 ************************************ 00:04:22.750 END TEST env_vtophys 00:04:22.750 ************************************ 00:04:22.750 16:21:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:22.750 16:21:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:22.750 16:21:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.750 16:21:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.750 16:21:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.750 ************************************ 00:04:22.750 START TEST env_pci 00:04:22.750 ************************************ 00:04:22.750 16:21:02 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:22.750 00:04:22.750 00:04:22.750 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.750 http://cunit.sourceforge.net/ 00:04:22.750 00:04:22.750 00:04:22.750 Suite: pci 00:04:22.750 Test: pci_hook ...[2024-07-15 16:21:02.221309] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1999869 has claimed it 00:04:22.750 EAL: Cannot find device (10000:00:01.0) 00:04:22.750 EAL: Failed to attach device on primary process 00:04:22.750 passed 00:04:22.750 00:04:22.750 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.750 suites 1 1 n/a 0 0 00:04:22.750 tests 1 1 1 0 0 00:04:22.750 asserts 25 25 25 0 n/a 00:04:22.750 00:04:22.750 Elapsed time = 0.034 seconds 00:04:22.750 00:04:22.750 real 0m0.054s 00:04:22.750 user 0m0.009s 00:04:22.750 sys 0m0.044s 00:04:22.750 16:21:02 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.750 16:21:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:22.750 ************************************ 00:04:22.750 END TEST env_pci 00:04:22.750 ************************************ 
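Editor's note: the env_memory, env_vtophys and env_pci runs above come from standalone CUnit binaries under test/env/ in the SPDK tree. A minimal sketch of replaying them by hand against an already-built tree (paths are relative to the repository root; the hugepage setup step is an assumption, not something this log shows):

    # assumed prerequisite: reserve hugepages and bind devices to a userspace driver
    sudo HUGEMEM=4096 ./scripts/setup.sh
    # the three unit-test binaries exercised above
    sudo ./test/env/memory/memory_ut
    sudo ./test/env/vtophys/vtophys
    sudo ./test/env/pci/pci_ut
    # or let the wrapper drive the whole group, as autotest does
    sudo ./test/env/env.sh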
00:04:22.750 16:21:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:22.750 16:21:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:22.750 16:21:02 env -- env/env.sh@15 -- # uname 00:04:22.750 16:21:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:22.750 16:21:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:22.750 16:21:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.750 16:21:02 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:22.750 16:21:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.750 16:21:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.751 ************************************ 00:04:22.751 START TEST env_dpdk_post_init 00:04:22.751 ************************************ 00:04:22.751 16:21:02 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.025 EAL: Detected CPU lcores: 112 00:04:23.025 EAL: Detected NUMA nodes: 2 00:04:23.025 EAL: Detected static linkage of DPDK 00:04:23.025 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.025 EAL: Selected IOVA mode 'VA' 00:04:23.025 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.025 EAL: VFIO support initialized 00:04:23.025 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.025 EAL: Using IOMMU type 1 (Type 1) 00:04:23.649 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:27.845 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:27.845 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:27.845 Starting DPDK initialization... 00:04:27.845 Starting SPDK post initialization... 00:04:27.845 SPDK NVMe probe 00:04:27.845 Attaching to 0000:d8:00.0 00:04:27.845 Attached to 0000:d8:00.0 00:04:27.845 Cleaning up... 
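Editor's note: env_dpdk_post_init above was launched with '-c 0x1 --base-virtaddr=0x200000000000' and probed the NVMe controller at 0000:d8:00.0 through the spdk_nvme driver. The same two knobs, core mask and base virtual address, are ordinary SPDK application options as well; a sketch of an equivalent invocation of the main target from this build tree (illustrative only, not a command taken from this log; spdk_tgt spells the core mask -m rather than -c):

    # pin the target to core 0 and fix the hugepage mapping base address
    sudo ./build/bin/spdk_tgt -m 0x1 --base-virtaddr=0x200000000000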
00:04:27.845 00:04:27.845 real 0m4.751s 00:04:27.845 user 0m3.572s 00:04:27.845 sys 0m0.425s 00:04:27.845 16:21:07 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.845 16:21:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.845 ************************************ 00:04:27.845 END TEST env_dpdk_post_init 00:04:27.845 ************************************ 00:04:27.845 16:21:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:27.845 16:21:07 env -- env/env.sh@26 -- # uname 00:04:27.845 16:21:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:27.845 16:21:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.845 16:21:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.845 16:21:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.845 16:21:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.845 ************************************ 00:04:27.845 START TEST env_mem_callbacks 00:04:27.845 ************************************ 00:04:27.845 16:21:07 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.845 EAL: Detected CPU lcores: 112 00:04:27.845 EAL: Detected NUMA nodes: 2 00:04:27.845 EAL: Detected static linkage of DPDK 00:04:27.845 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.845 EAL: Selected IOVA mode 'VA' 00:04:27.845 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.845 EAL: VFIO support initialized 00:04:27.845 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.845 00:04:27.845 00:04:27.845 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.845 http://cunit.sourceforge.net/ 00:04:27.845 00:04:27.845 00:04:27.845 Suite: memory 00:04:27.845 Test: test ... 
00:04:27.846 register 0x200000200000 2097152 00:04:27.846 malloc 3145728 00:04:27.846 register 0x200000400000 4194304 00:04:27.846 buf 0x200000500000 len 3145728 PASSED 00:04:27.846 malloc 64 00:04:27.846 buf 0x2000004fff40 len 64 PASSED 00:04:27.846 malloc 4194304 00:04:27.846 register 0x200000800000 6291456 00:04:27.846 buf 0x200000a00000 len 4194304 PASSED 00:04:27.846 free 0x200000500000 3145728 00:04:27.846 free 0x2000004fff40 64 00:04:27.846 unregister 0x200000400000 4194304 PASSED 00:04:27.846 free 0x200000a00000 4194304 00:04:27.846 unregister 0x200000800000 6291456 PASSED 00:04:27.846 malloc 8388608 00:04:27.846 register 0x200000400000 10485760 00:04:27.846 buf 0x200000600000 len 8388608 PASSED 00:04:27.846 free 0x200000600000 8388608 00:04:27.846 unregister 0x200000400000 10485760 PASSED 00:04:27.846 passed 00:04:27.846 00:04:27.846 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.846 suites 1 1 n/a 0 0 00:04:27.846 tests 1 1 1 0 0 00:04:27.846 asserts 15 15 15 0 n/a 00:04:27.846 00:04:27.846 Elapsed time = 0.005 seconds 00:04:27.846 00:04:27.846 real 0m0.065s 00:04:27.846 user 0m0.018s 00:04:27.846 sys 0m0.047s 00:04:27.846 16:21:07 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.846 16:21:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:27.846 ************************************ 00:04:27.846 END TEST env_mem_callbacks 00:04:27.846 ************************************ 00:04:27.846 16:21:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:27.846 00:04:27.846 real 0m6.553s 00:04:27.846 user 0m4.499s 00:04:27.846 sys 0m1.308s 00:04:27.846 16:21:07 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.846 16:21:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.846 ************************************ 00:04:27.846 END TEST env 00:04:27.846 ************************************ 00:04:27.846 16:21:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.846 16:21:07 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:27.846 16:21:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.846 16:21:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.846 16:21:07 -- common/autotest_common.sh@10 -- # set +x 00:04:27.846 ************************************ 00:04:27.846 START TEST rpc 00:04:27.846 ************************************ 00:04:27.846 16:21:07 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:28.104 * Looking for test storage... 00:04:28.104 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:28.104 16:21:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2001347 00:04:28.104 16:21:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.104 16:21:07 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:28.104 16:21:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2001347 00:04:28.104 16:21:07 rpc -- common/autotest_common.sh@829 -- # '[' -z 2001347 ']' 00:04:28.104 16:21:07 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.104 16:21:07 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.104 16:21:07 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
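Editor's note: rpc.sh starts the target with the bdev tracepoint group enabled ('-e bdev') and then drives it over the default UNIX socket /var/tmp/spdk.sock with scripts/rpc.py. A hand-run sketch of the same flow, using only RPC methods that appear later in this log (paths relative to the SPDK tree; run the target in one shell and the RPC calls in another):

    # shell 1: start the target with bdev tracepoints enabled
    sudo ./build/bin/spdk_tgt -e bdev
    # shell 2: the calls the rpc_integrity test case issues below
    sudo ./scripts/rpc.py bdev_malloc_create 8 512 -b Malloc0
    sudo ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    sudo ./scripts/rpc.py bdev_get_bdevs
    sudo ./scripts/rpc.py bdev_passthru_delete Passthru0
    sudo ./scripts/rpc.py bdev_malloc_delete Malloc0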
00:04:28.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.104 16:21:07 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.104 16:21:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.104 [2024-07-15 16:21:07.505665] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:28.104 [2024-07-15 16:21:07.505735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001347 ] 00:04:28.104 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.104 [2024-07-15 16:21:07.574355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.104 [2024-07-15 16:21:07.650280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:28.104 [2024-07-15 16:21:07.650320] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2001347' to capture a snapshot of events at runtime. 00:04:28.104 [2024-07-15 16:21:07.650330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:28.104 [2024-07-15 16:21:07.650339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:28.104 [2024-07-15 16:21:07.650346] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2001347 for offline analysis/debug. 00:04:28.104 [2024-07-15 16:21:07.650371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.037 16:21:08 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.037 16:21:08 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:29.037 16:21:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:29.037 16:21:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:29.037 16:21:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:29.037 16:21:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:29.037 16:21:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.037 16:21:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.037 16:21:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.037 ************************************ 00:04:29.037 START TEST rpc_integrity 00:04:29.037 ************************************ 00:04:29.037 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.038 16:21:08 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.038 { 00:04:29.038 "name": "Malloc0", 00:04:29.038 "aliases": [ 00:04:29.038 "7c257032-12a1-4883-a136-a04a46c497f9" 00:04:29.038 ], 00:04:29.038 "product_name": "Malloc disk", 00:04:29.038 "block_size": 512, 00:04:29.038 "num_blocks": 16384, 00:04:29.038 "uuid": "7c257032-12a1-4883-a136-a04a46c497f9", 00:04:29.038 "assigned_rate_limits": { 00:04:29.038 "rw_ios_per_sec": 0, 00:04:29.038 "rw_mbytes_per_sec": 0, 00:04:29.038 "r_mbytes_per_sec": 0, 00:04:29.038 "w_mbytes_per_sec": 0 00:04:29.038 }, 00:04:29.038 "claimed": false, 00:04:29.038 "zoned": false, 00:04:29.038 "supported_io_types": { 00:04:29.038 "read": true, 00:04:29.038 "write": true, 00:04:29.038 "unmap": true, 00:04:29.038 "flush": true, 00:04:29.038 "reset": true, 00:04:29.038 "nvme_admin": false, 00:04:29.038 "nvme_io": false, 00:04:29.038 "nvme_io_md": false, 00:04:29.038 "write_zeroes": true, 00:04:29.038 "zcopy": true, 00:04:29.038 "get_zone_info": false, 00:04:29.038 "zone_management": false, 00:04:29.038 "zone_append": false, 00:04:29.038 "compare": false, 00:04:29.038 "compare_and_write": false, 00:04:29.038 "abort": true, 00:04:29.038 "seek_hole": false, 00:04:29.038 "seek_data": false, 00:04:29.038 "copy": true, 00:04:29.038 "nvme_iov_md": false 00:04:29.038 }, 00:04:29.038 "memory_domains": [ 00:04:29.038 { 00:04:29.038 "dma_device_id": "system", 00:04:29.038 "dma_device_type": 1 00:04:29.038 }, 00:04:29.038 { 00:04:29.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.038 "dma_device_type": 2 00:04:29.038 } 00:04:29.038 ], 00:04:29.038 "driver_specific": {} 00:04:29.038 } 00:04:29.038 ]' 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.038 [2024-07-15 16:21:08.498756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:29.038 [2024-07-15 16:21:08.498790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.038 [2024-07-15 16:21:08.498808] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x48c7260 00:04:29.038 [2024-07-15 16:21:08.498817] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:04:29.038 [2024-07-15 16:21:08.499628] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.038 [2024-07-15 16:21:08.499651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.038 Passthru0 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.038 { 00:04:29.038 "name": "Malloc0", 00:04:29.038 "aliases": [ 00:04:29.038 "7c257032-12a1-4883-a136-a04a46c497f9" 00:04:29.038 ], 00:04:29.038 "product_name": "Malloc disk", 00:04:29.038 "block_size": 512, 00:04:29.038 "num_blocks": 16384, 00:04:29.038 "uuid": "7c257032-12a1-4883-a136-a04a46c497f9", 00:04:29.038 "assigned_rate_limits": { 00:04:29.038 "rw_ios_per_sec": 0, 00:04:29.038 "rw_mbytes_per_sec": 0, 00:04:29.038 "r_mbytes_per_sec": 0, 00:04:29.038 "w_mbytes_per_sec": 0 00:04:29.038 }, 00:04:29.038 "claimed": true, 00:04:29.038 "claim_type": "exclusive_write", 00:04:29.038 "zoned": false, 00:04:29.038 "supported_io_types": { 00:04:29.038 "read": true, 00:04:29.038 "write": true, 00:04:29.038 "unmap": true, 00:04:29.038 "flush": true, 00:04:29.038 "reset": true, 00:04:29.038 "nvme_admin": false, 00:04:29.038 "nvme_io": false, 00:04:29.038 "nvme_io_md": false, 00:04:29.038 "write_zeroes": true, 00:04:29.038 "zcopy": true, 00:04:29.038 "get_zone_info": false, 00:04:29.038 "zone_management": false, 00:04:29.038 "zone_append": false, 00:04:29.038 "compare": false, 00:04:29.038 "compare_and_write": false, 00:04:29.038 "abort": true, 00:04:29.038 "seek_hole": false, 00:04:29.038 "seek_data": false, 00:04:29.038 "copy": true, 00:04:29.038 "nvme_iov_md": false 00:04:29.038 }, 00:04:29.038 "memory_domains": [ 00:04:29.038 { 00:04:29.038 "dma_device_id": "system", 00:04:29.038 "dma_device_type": 1 00:04:29.038 }, 00:04:29.038 { 00:04:29.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.038 "dma_device_type": 2 00:04:29.038 } 00:04:29.038 ], 00:04:29.038 "driver_specific": {} 00:04:29.038 }, 00:04:29.038 { 00:04:29.038 "name": "Passthru0", 00:04:29.038 "aliases": [ 00:04:29.038 "13deb241-4656-58c2-a16d-486b363d6c81" 00:04:29.038 ], 00:04:29.038 "product_name": "passthru", 00:04:29.038 "block_size": 512, 00:04:29.038 "num_blocks": 16384, 00:04:29.038 "uuid": "13deb241-4656-58c2-a16d-486b363d6c81", 00:04:29.038 "assigned_rate_limits": { 00:04:29.038 "rw_ios_per_sec": 0, 00:04:29.038 "rw_mbytes_per_sec": 0, 00:04:29.038 "r_mbytes_per_sec": 0, 00:04:29.038 "w_mbytes_per_sec": 0 00:04:29.038 }, 00:04:29.038 "claimed": false, 00:04:29.038 "zoned": false, 00:04:29.038 "supported_io_types": { 00:04:29.038 "read": true, 00:04:29.038 "write": true, 00:04:29.038 "unmap": true, 00:04:29.038 "flush": true, 00:04:29.038 "reset": true, 00:04:29.038 "nvme_admin": false, 00:04:29.038 "nvme_io": false, 00:04:29.038 "nvme_io_md": false, 00:04:29.038 "write_zeroes": true, 00:04:29.038 "zcopy": true, 00:04:29.038 "get_zone_info": false, 00:04:29.038 "zone_management": false, 00:04:29.038 "zone_append": false, 00:04:29.038 "compare": false, 00:04:29.038 "compare_and_write": false, 00:04:29.038 "abort": true, 00:04:29.038 
"seek_hole": false, 00:04:29.038 "seek_data": false, 00:04:29.038 "copy": true, 00:04:29.038 "nvme_iov_md": false 00:04:29.038 }, 00:04:29.038 "memory_domains": [ 00:04:29.038 { 00:04:29.038 "dma_device_id": "system", 00:04:29.038 "dma_device_type": 1 00:04:29.038 }, 00:04:29.038 { 00:04:29.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.038 "dma_device_type": 2 00:04:29.038 } 00:04:29.038 ], 00:04:29.038 "driver_specific": { 00:04:29.038 "passthru": { 00:04:29.038 "name": "Passthru0", 00:04:29.038 "base_bdev_name": "Malloc0" 00:04:29.038 } 00:04:29.038 } 00:04:29.038 } 00:04:29.038 ]' 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.038 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.038 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:29.296 16:21:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.296 00:04:29.296 real 0m0.268s 00:04:29.296 user 0m0.170s 00:04:29.296 sys 0m0.043s 00:04:29.296 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.296 16:21:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.296 ************************************ 00:04:29.296 END TEST rpc_integrity 00:04:29.296 ************************************ 00:04:29.296 16:21:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:29.296 16:21:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:29.296 16:21:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.296 16:21:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.296 16:21:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.296 ************************************ 00:04:29.296 START TEST rpc_plugins 00:04:29.296 ************************************ 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:29.296 16:21:08 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:29.296 { 00:04:29.296 "name": "Malloc1", 00:04:29.296 "aliases": [ 00:04:29.296 "f50a4706-64bb-4faa-aaa3-80d7c41d864f" 00:04:29.296 ], 00:04:29.296 "product_name": "Malloc disk", 00:04:29.296 "block_size": 4096, 00:04:29.296 "num_blocks": 256, 00:04:29.296 "uuid": "f50a4706-64bb-4faa-aaa3-80d7c41d864f", 00:04:29.296 "assigned_rate_limits": { 00:04:29.296 "rw_ios_per_sec": 0, 00:04:29.296 "rw_mbytes_per_sec": 0, 00:04:29.296 "r_mbytes_per_sec": 0, 00:04:29.296 "w_mbytes_per_sec": 0 00:04:29.296 }, 00:04:29.296 "claimed": false, 00:04:29.296 "zoned": false, 00:04:29.296 "supported_io_types": { 00:04:29.296 "read": true, 00:04:29.296 "write": true, 00:04:29.296 "unmap": true, 00:04:29.296 "flush": true, 00:04:29.296 "reset": true, 00:04:29.296 "nvme_admin": false, 00:04:29.296 "nvme_io": false, 00:04:29.296 "nvme_io_md": false, 00:04:29.296 "write_zeroes": true, 00:04:29.296 "zcopy": true, 00:04:29.296 "get_zone_info": false, 00:04:29.296 "zone_management": false, 00:04:29.296 "zone_append": false, 00:04:29.296 "compare": false, 00:04:29.296 "compare_and_write": false, 00:04:29.296 "abort": true, 00:04:29.296 "seek_hole": false, 00:04:29.296 "seek_data": false, 00:04:29.296 "copy": true, 00:04:29.296 "nvme_iov_md": false 00:04:29.296 }, 00:04:29.296 "memory_domains": [ 00:04:29.296 { 00:04:29.296 "dma_device_id": "system", 00:04:29.296 "dma_device_type": 1 00:04:29.296 }, 00:04:29.296 { 00:04:29.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.296 "dma_device_type": 2 00:04:29.296 } 00:04:29.296 ], 00:04:29.296 "driver_specific": {} 00:04:29.296 } 00:04:29.296 ]' 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:29.296 16:21:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:29.296 00:04:29.296 real 0m0.149s 00:04:29.296 user 0m0.091s 00:04:29.296 sys 0m0.023s 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.296 16:21:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.296 ************************************ 00:04:29.296 END TEST rpc_plugins 00:04:29.296 ************************************ 00:04:29.554 16:21:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:29.554 16:21:08 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:29.554 16:21:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.554 16:21:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.554 16:21:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.554 ************************************ 00:04:29.554 START TEST rpc_trace_cmd_test 00:04:29.554 ************************************ 00:04:29.554 16:21:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:29.554 16:21:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:29.554 16:21:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:29.554 16:21:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.554 16:21:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.554 16:21:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.554 16:21:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:29.554 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2001347", 00:04:29.554 "tpoint_group_mask": "0x8", 00:04:29.554 "iscsi_conn": { 00:04:29.554 "mask": "0x2", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "scsi": { 00:04:29.554 "mask": "0x4", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "bdev": { 00:04:29.554 "mask": "0x8", 00:04:29.554 "tpoint_mask": "0xffffffffffffffff" 00:04:29.554 }, 00:04:29.554 "nvmf_rdma": { 00:04:29.554 "mask": "0x10", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "nvmf_tcp": { 00:04:29.554 "mask": "0x20", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "ftl": { 00:04:29.554 "mask": "0x40", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "blobfs": { 00:04:29.554 "mask": "0x80", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "dsa": { 00:04:29.554 "mask": "0x200", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "thread": { 00:04:29.554 "mask": "0x400", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "nvme_pcie": { 00:04:29.554 "mask": "0x800", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "iaa": { 00:04:29.554 "mask": "0x1000", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "nvme_tcp": { 00:04:29.554 "mask": "0x2000", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "bdev_nvme": { 00:04:29.554 "mask": "0x4000", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 }, 00:04:29.554 "sock": { 00:04:29.554 "mask": "0x8000", 00:04:29.554 "tpoint_mask": "0x0" 00:04:29.554 } 00:04:29.554 }' 00:04:29.554 16:21:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:04:29.554 00:04:29.554 real 0m0.182s 00:04:29.554 user 0m0.148s 00:04:29.554 sys 0m0.026s 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.554 16:21:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.554 ************************************ 00:04:29.554 END TEST rpc_trace_cmd_test 00:04:29.554 ************************************ 00:04:29.812 16:21:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:29.812 16:21:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:29.812 16:21:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:29.812 16:21:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:29.812 16:21:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.812 16:21:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.812 16:21:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.812 ************************************ 00:04:29.812 START TEST rpc_daemon_integrity 00:04:29.812 ************************************ 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.812 { 00:04:29.812 "name": "Malloc2", 00:04:29.812 "aliases": [ 00:04:29.812 "ddd3375f-47dd-4f10-adde-e4d82ec409e2" 00:04:29.812 ], 00:04:29.812 "product_name": "Malloc disk", 00:04:29.812 "block_size": 512, 00:04:29.812 "num_blocks": 16384, 00:04:29.812 "uuid": "ddd3375f-47dd-4f10-adde-e4d82ec409e2", 00:04:29.812 "assigned_rate_limits": { 00:04:29.812 "rw_ios_per_sec": 0, 00:04:29.812 "rw_mbytes_per_sec": 0, 00:04:29.812 "r_mbytes_per_sec": 0, 00:04:29.812 "w_mbytes_per_sec": 0 00:04:29.812 }, 00:04:29.812 "claimed": false, 00:04:29.812 "zoned": false, 00:04:29.812 "supported_io_types": { 00:04:29.812 "read": true, 00:04:29.812 "write": true, 00:04:29.812 "unmap": true, 00:04:29.812 "flush": true, 00:04:29.812 "reset": true, 00:04:29.812 "nvme_admin": false, 
00:04:29.812 "nvme_io": false, 00:04:29.812 "nvme_io_md": false, 00:04:29.812 "write_zeroes": true, 00:04:29.812 "zcopy": true, 00:04:29.812 "get_zone_info": false, 00:04:29.812 "zone_management": false, 00:04:29.812 "zone_append": false, 00:04:29.812 "compare": false, 00:04:29.812 "compare_and_write": false, 00:04:29.812 "abort": true, 00:04:29.812 "seek_hole": false, 00:04:29.812 "seek_data": false, 00:04:29.812 "copy": true, 00:04:29.812 "nvme_iov_md": false 00:04:29.812 }, 00:04:29.812 "memory_domains": [ 00:04:29.812 { 00:04:29.812 "dma_device_id": "system", 00:04:29.812 "dma_device_type": 1 00:04:29.812 }, 00:04:29.812 { 00:04:29.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.812 "dma_device_type": 2 00:04:29.812 } 00:04:29.812 ], 00:04:29.812 "driver_specific": {} 00:04:29.812 } 00:04:29.812 ]' 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.812 [2024-07-15 16:21:09.352950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:29.812 [2024-07-15 16:21:09.352980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.812 [2024-07-15 16:21:09.352997] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x48b8860 00:04:29.812 [2024-07-15 16:21:09.353006] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.812 [2024-07-15 16:21:09.353707] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.812 [2024-07-15 16:21:09.353728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.812 Passthru0 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.812 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.812 { 00:04:29.812 "name": "Malloc2", 00:04:29.812 "aliases": [ 00:04:29.812 "ddd3375f-47dd-4f10-adde-e4d82ec409e2" 00:04:29.812 ], 00:04:29.812 "product_name": "Malloc disk", 00:04:29.812 "block_size": 512, 00:04:29.812 "num_blocks": 16384, 00:04:29.812 "uuid": "ddd3375f-47dd-4f10-adde-e4d82ec409e2", 00:04:29.812 "assigned_rate_limits": { 00:04:29.812 "rw_ios_per_sec": 0, 00:04:29.812 "rw_mbytes_per_sec": 0, 00:04:29.812 "r_mbytes_per_sec": 0, 00:04:29.812 "w_mbytes_per_sec": 0 00:04:29.812 }, 00:04:29.812 "claimed": true, 00:04:29.812 "claim_type": "exclusive_write", 00:04:29.812 "zoned": false, 00:04:29.812 "supported_io_types": { 00:04:29.812 "read": true, 00:04:29.812 "write": true, 00:04:29.812 "unmap": true, 00:04:29.812 "flush": true, 00:04:29.812 "reset": true, 00:04:29.812 "nvme_admin": false, 00:04:29.812 "nvme_io": false, 00:04:29.813 "nvme_io_md": false, 00:04:29.813 "write_zeroes": true, 00:04:29.813 "zcopy": true, 
00:04:29.813 "get_zone_info": false, 00:04:29.813 "zone_management": false, 00:04:29.813 "zone_append": false, 00:04:29.813 "compare": false, 00:04:29.813 "compare_and_write": false, 00:04:29.813 "abort": true, 00:04:29.813 "seek_hole": false, 00:04:29.813 "seek_data": false, 00:04:29.813 "copy": true, 00:04:29.813 "nvme_iov_md": false 00:04:29.813 }, 00:04:29.813 "memory_domains": [ 00:04:29.813 { 00:04:29.813 "dma_device_id": "system", 00:04:29.813 "dma_device_type": 1 00:04:29.813 }, 00:04:29.813 { 00:04:29.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.813 "dma_device_type": 2 00:04:29.813 } 00:04:29.813 ], 00:04:29.813 "driver_specific": {} 00:04:29.813 }, 00:04:29.813 { 00:04:29.813 "name": "Passthru0", 00:04:29.813 "aliases": [ 00:04:29.813 "cd9031d6-61ba-55a7-9593-5f037eaacea2" 00:04:29.813 ], 00:04:29.813 "product_name": "passthru", 00:04:29.813 "block_size": 512, 00:04:29.813 "num_blocks": 16384, 00:04:29.813 "uuid": "cd9031d6-61ba-55a7-9593-5f037eaacea2", 00:04:29.813 "assigned_rate_limits": { 00:04:29.813 "rw_ios_per_sec": 0, 00:04:29.813 "rw_mbytes_per_sec": 0, 00:04:29.813 "r_mbytes_per_sec": 0, 00:04:29.813 "w_mbytes_per_sec": 0 00:04:29.813 }, 00:04:29.813 "claimed": false, 00:04:29.813 "zoned": false, 00:04:29.813 "supported_io_types": { 00:04:29.813 "read": true, 00:04:29.813 "write": true, 00:04:29.813 "unmap": true, 00:04:29.813 "flush": true, 00:04:29.813 "reset": true, 00:04:29.813 "nvme_admin": false, 00:04:29.813 "nvme_io": false, 00:04:29.813 "nvme_io_md": false, 00:04:29.813 "write_zeroes": true, 00:04:29.813 "zcopy": true, 00:04:29.813 "get_zone_info": false, 00:04:29.813 "zone_management": false, 00:04:29.813 "zone_append": false, 00:04:29.813 "compare": false, 00:04:29.813 "compare_and_write": false, 00:04:29.813 "abort": true, 00:04:29.813 "seek_hole": false, 00:04:29.813 "seek_data": false, 00:04:29.813 "copy": true, 00:04:29.813 "nvme_iov_md": false 00:04:29.813 }, 00:04:29.813 "memory_domains": [ 00:04:29.813 { 00:04:29.813 "dma_device_id": "system", 00:04:29.813 "dma_device_type": 1 00:04:29.813 }, 00:04:29.813 { 00:04:29.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.813 "dma_device_type": 2 00:04:29.813 } 00:04:29.813 ], 00:04:29.813 "driver_specific": { 00:04:29.813 "passthru": { 00:04:29.813 "name": "Passthru0", 00:04:29.813 "base_bdev_name": "Malloc2" 00:04:29.813 } 00:04:29.813 } 00:04:29.813 } 00:04:29.813 ]' 00:04:29.813 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.071 00:04:30.071 real 0m0.261s 00:04:30.071 user 0m0.150s 00:04:30.071 sys 0m0.057s 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.071 16:21:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.071 ************************************ 00:04:30.071 END TEST rpc_daemon_integrity 00:04:30.071 ************************************ 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:30.071 16:21:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:30.071 16:21:09 rpc -- rpc/rpc.sh@84 -- # killprocess 2001347 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@948 -- # '[' -z 2001347 ']' 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@952 -- # kill -0 2001347 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@953 -- # uname 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001347 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001347' 00:04:30.071 killing process with pid 2001347 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@967 -- # kill 2001347 00:04:30.071 16:21:09 rpc -- common/autotest_common.sh@972 -- # wait 2001347 00:04:30.330 00:04:30.330 real 0m2.499s 00:04:30.330 user 0m3.132s 00:04:30.330 sys 0m0.807s 00:04:30.330 16:21:09 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.330 16:21:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.330 ************************************ 00:04:30.330 END TEST rpc 00:04:30.330 ************************************ 00:04:30.330 16:21:09 -- common/autotest_common.sh@1142 -- # return 0 00:04:30.330 16:21:09 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:30.330 16:21:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.330 16:21:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.330 16:21:09 -- common/autotest_common.sh@10 -- # set +x 00:04:30.589 ************************************ 00:04:30.589 START TEST skip_rpc 00:04:30.589 ************************************ 00:04:30.589 16:21:09 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:30.589 * Looking for test storage... 
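The rpc_daemon_integrity trace above boils down to a handful of RPCs and two jq length assertions: two bdevs (Malloc2 plus the Passthru0 that claims it) before cleanup, zero after. A minimal sketch of that check follows; it assumes a target is already running with that pair created, and that rpc.py sits at the usual scripts/ path — both are assumptions, while the RPC names and expected counts come straight from the trace.

```bash
# Sketch of the integrity check traced above; the rpc.py location is an
# assumption, the RPC names and expected counts come from the trace itself.
bdevs=$(./scripts/rpc.py bdev_get_bdevs)
[ "$(echo "$bdevs" | jq length)" -eq 2 ]                      # Malloc2 + Passthru0

./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc2
[ "$(./scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0 ]    # both gone
```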
00:04:30.589 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:30.589 16:21:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:30.589 16:21:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:30.589 16:21:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:30.589 16:21:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.589 16:21:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.589 16:21:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.589 ************************************ 00:04:30.589 START TEST skip_rpc 00:04:30.589 ************************************ 00:04:30.589 16:21:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:30.589 16:21:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2002051 00:04:30.589 16:21:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.589 16:21:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:30.589 16:21:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:30.589 [2024-07-15 16:21:10.130595] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:30.589 [2024-07-15 16:21:10.130673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002051 ] 00:04:30.589 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.847 [2024-07-15 16:21:10.199042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.847 [2024-07-15 16:21:10.271430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:36.115 16:21:15 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2002051 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2002051 ']' 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2002051 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2002051 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2002051' 00:04:36.115 killing process with pid 2002051 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2002051 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2002051 00:04:36.115 00:04:36.115 real 0m5.373s 00:04:36.115 user 0m5.143s 00:04:36.115 sys 0m0.275s 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.115 16:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.115 ************************************ 00:04:36.115 END TEST skip_rpc 00:04:36.115 ************************************ 00:04:36.115 16:21:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:36.115 16:21:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:36.115 16:21:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.115 16:21:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.115 16:21:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.115 ************************************ 00:04:36.115 START TEST skip_rpc_with_json 00:04:36.115 ************************************ 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2002887 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2002887 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2002887 ']' 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
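The skip_rpc run above starts spdk_tgt with --no-rpc-server, waits five seconds, requires that a plain spdk_get_version RPC fails, and then kills the target. A standalone sketch of that pattern, with the binary and rpc.py paths assumed:

```bash
# Sketch of the skip_rpc check: with --no-rpc-server every RPC must fail.
SPDK_TGT=./build/bin/spdk_tgt      # assumed build output location
RPC=./scripts/rpc.py               # assumed RPC client path

"$SPDK_TGT" --no-rpc-server -m 0x1 &
pid=$!
sleep 5                            # the test also uses a fixed 5 second wait

if "$RPC" spdk_get_version; then
    echo "unexpected: RPC answered although --no-rpc-server was given" >&2
    kill -9 "$pid"
    exit 1
fi

kill -SIGINT "$pid"
wait "$pid" || true
```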
00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.115 16:21:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.115 [2024-07-15 16:21:15.585583] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:36.115 [2024-07-15 16:21:15.585649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002887 ] 00:04:36.115 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.115 [2024-07-15 16:21:15.653966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.373 [2024-07-15 16:21:15.724596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.939 [2024-07-15 16:21:16.412104] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:36.939 request: 00:04:36.939 { 00:04:36.939 "trtype": "tcp", 00:04:36.939 "method": "nvmf_get_transports", 00:04:36.939 "req_id": 1 00:04:36.939 } 00:04:36.939 Got JSON-RPC error response 00:04:36.939 response: 00:04:36.939 { 00:04:36.939 "code": -19, 00:04:36.939 "message": "No such device" 00:04:36.939 } 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.939 [2024-07-15 16:21:16.424193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.939 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.197 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.198 16:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:37.198 { 00:04:37.198 "subsystems": [ 00:04:37.198 { 00:04:37.198 "subsystem": "scheduler", 00:04:37.198 "config": [ 00:04:37.198 { 00:04:37.198 "method": "framework_set_scheduler", 00:04:37.198 "params": { 00:04:37.198 "name": "static" 00:04:37.198 } 00:04:37.198 } 00:04:37.198 ] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "vmd", 00:04:37.198 "config": [] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "sock", 00:04:37.198 "config": [ 00:04:37.198 { 00:04:37.198 "method": "sock_set_default_impl", 00:04:37.198 
"params": { 00:04:37.198 "impl_name": "posix" 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "sock_impl_set_options", 00:04:37.198 "params": { 00:04:37.198 "impl_name": "ssl", 00:04:37.198 "recv_buf_size": 4096, 00:04:37.198 "send_buf_size": 4096, 00:04:37.198 "enable_recv_pipe": true, 00:04:37.198 "enable_quickack": false, 00:04:37.198 "enable_placement_id": 0, 00:04:37.198 "enable_zerocopy_send_server": true, 00:04:37.198 "enable_zerocopy_send_client": false, 00:04:37.198 "zerocopy_threshold": 0, 00:04:37.198 "tls_version": 0, 00:04:37.198 "enable_ktls": false 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "sock_impl_set_options", 00:04:37.198 "params": { 00:04:37.198 "impl_name": "posix", 00:04:37.198 "recv_buf_size": 2097152, 00:04:37.198 "send_buf_size": 2097152, 00:04:37.198 "enable_recv_pipe": true, 00:04:37.198 "enable_quickack": false, 00:04:37.198 "enable_placement_id": 0, 00:04:37.198 "enable_zerocopy_send_server": true, 00:04:37.198 "enable_zerocopy_send_client": false, 00:04:37.198 "zerocopy_threshold": 0, 00:04:37.198 "tls_version": 0, 00:04:37.198 "enable_ktls": false 00:04:37.198 } 00:04:37.198 } 00:04:37.198 ] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "iobuf", 00:04:37.198 "config": [ 00:04:37.198 { 00:04:37.198 "method": "iobuf_set_options", 00:04:37.198 "params": { 00:04:37.198 "small_pool_count": 8192, 00:04:37.198 "large_pool_count": 1024, 00:04:37.198 "small_bufsize": 8192, 00:04:37.198 "large_bufsize": 135168 00:04:37.198 } 00:04:37.198 } 00:04:37.198 ] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "keyring", 00:04:37.198 "config": [] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "vfio_user_target", 00:04:37.198 "config": null 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "accel", 00:04:37.198 "config": [ 00:04:37.198 { 00:04:37.198 "method": "accel_set_options", 00:04:37.198 "params": { 00:04:37.198 "small_cache_size": 128, 00:04:37.198 "large_cache_size": 16, 00:04:37.198 "task_count": 2048, 00:04:37.198 "sequence_count": 2048, 00:04:37.198 "buf_count": 2048 00:04:37.198 } 00:04:37.198 } 00:04:37.198 ] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "bdev", 00:04:37.198 "config": [ 00:04:37.198 { 00:04:37.198 "method": "bdev_set_options", 00:04:37.198 "params": { 00:04:37.198 "bdev_io_pool_size": 65535, 00:04:37.198 "bdev_io_cache_size": 256, 00:04:37.198 "bdev_auto_examine": true, 00:04:37.198 "iobuf_small_cache_size": 128, 00:04:37.198 "iobuf_large_cache_size": 16 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "bdev_raid_set_options", 00:04:37.198 "params": { 00:04:37.198 "process_window_size_kb": 1024 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "bdev_nvme_set_options", 00:04:37.198 "params": { 00:04:37.198 "action_on_timeout": "none", 00:04:37.198 "timeout_us": 0, 00:04:37.198 "timeout_admin_us": 0, 00:04:37.198 "keep_alive_timeout_ms": 10000, 00:04:37.198 "arbitration_burst": 0, 00:04:37.198 "low_priority_weight": 0, 00:04:37.198 "medium_priority_weight": 0, 00:04:37.198 "high_priority_weight": 0, 00:04:37.198 "nvme_adminq_poll_period_us": 10000, 00:04:37.198 "nvme_ioq_poll_period_us": 0, 00:04:37.198 "io_queue_requests": 0, 00:04:37.198 "delay_cmd_submit": true, 00:04:37.198 "transport_retry_count": 4, 00:04:37.198 "bdev_retry_count": 3, 00:04:37.198 "transport_ack_timeout": 0, 00:04:37.198 "ctrlr_loss_timeout_sec": 0, 00:04:37.198 "reconnect_delay_sec": 0, 00:04:37.198 "fast_io_fail_timeout_sec": 0, 00:04:37.198 
"disable_auto_failback": false, 00:04:37.198 "generate_uuids": false, 00:04:37.198 "transport_tos": 0, 00:04:37.198 "nvme_error_stat": false, 00:04:37.198 "rdma_srq_size": 0, 00:04:37.198 "io_path_stat": false, 00:04:37.198 "allow_accel_sequence": false, 00:04:37.198 "rdma_max_cq_size": 0, 00:04:37.198 "rdma_cm_event_timeout_ms": 0, 00:04:37.198 "dhchap_digests": [ 00:04:37.198 "sha256", 00:04:37.198 "sha384", 00:04:37.198 "sha512" 00:04:37.198 ], 00:04:37.198 "dhchap_dhgroups": [ 00:04:37.198 "null", 00:04:37.198 "ffdhe2048", 00:04:37.198 "ffdhe3072", 00:04:37.198 "ffdhe4096", 00:04:37.198 "ffdhe6144", 00:04:37.198 "ffdhe8192" 00:04:37.198 ] 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "bdev_nvme_set_hotplug", 00:04:37.198 "params": { 00:04:37.198 "period_us": 100000, 00:04:37.198 "enable": false 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "bdev_iscsi_set_options", 00:04:37.198 "params": { 00:04:37.198 "timeout_sec": 30 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "bdev_wait_for_examine" 00:04:37.198 } 00:04:37.198 ] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "nvmf", 00:04:37.198 "config": [ 00:04:37.198 { 00:04:37.198 "method": "nvmf_set_config", 00:04:37.198 "params": { 00:04:37.198 "discovery_filter": "match_any", 00:04:37.198 "admin_cmd_passthru": { 00:04:37.198 "identify_ctrlr": false 00:04:37.198 } 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "nvmf_set_max_subsystems", 00:04:37.198 "params": { 00:04:37.198 "max_subsystems": 1024 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "nvmf_set_crdt", 00:04:37.198 "params": { 00:04:37.198 "crdt1": 0, 00:04:37.198 "crdt2": 0, 00:04:37.198 "crdt3": 0 00:04:37.198 } 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "method": "nvmf_create_transport", 00:04:37.198 "params": { 00:04:37.198 "trtype": "TCP", 00:04:37.198 "max_queue_depth": 128, 00:04:37.198 "max_io_qpairs_per_ctrlr": 127, 00:04:37.198 "in_capsule_data_size": 4096, 00:04:37.198 "max_io_size": 131072, 00:04:37.198 "io_unit_size": 131072, 00:04:37.198 "max_aq_depth": 128, 00:04:37.198 "num_shared_buffers": 511, 00:04:37.198 "buf_cache_size": 4294967295, 00:04:37.198 "dif_insert_or_strip": false, 00:04:37.198 "zcopy": false, 00:04:37.198 "c2h_success": true, 00:04:37.198 "sock_priority": 0, 00:04:37.198 "abort_timeout_sec": 1, 00:04:37.198 "ack_timeout": 0, 00:04:37.198 "data_wr_pool_size": 0 00:04:37.198 } 00:04:37.198 } 00:04:37.198 ] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "nbd", 00:04:37.198 "config": [] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "ublk", 00:04:37.198 "config": [] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "vhost_blk", 00:04:37.198 "config": [] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "scsi", 00:04:37.198 "config": null 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "iscsi", 00:04:37.198 "config": [ 00:04:37.198 { 00:04:37.198 "method": "iscsi_set_options", 00:04:37.198 "params": { 00:04:37.198 "node_base": "iqn.2016-06.io.spdk", 00:04:37.198 "max_sessions": 128, 00:04:37.198 "max_connections_per_session": 2, 00:04:37.198 "max_queue_depth": 64, 00:04:37.198 "default_time2wait": 2, 00:04:37.198 "default_time2retain": 20, 00:04:37.198 "first_burst_length": 8192, 00:04:37.198 "immediate_data": true, 00:04:37.198 "allow_duplicated_isid": false, 00:04:37.198 "error_recovery_level": 0, 00:04:37.198 "nop_timeout": 60, 00:04:37.198 "nop_in_interval": 30, 00:04:37.198 
"disable_chap": false, 00:04:37.198 "require_chap": false, 00:04:37.198 "mutual_chap": false, 00:04:37.198 "chap_group": 0, 00:04:37.198 "max_large_datain_per_connection": 64, 00:04:37.198 "max_r2t_per_connection": 4, 00:04:37.198 "pdu_pool_size": 36864, 00:04:37.198 "immediate_data_pool_size": 16384, 00:04:37.198 "data_out_pool_size": 2048 00:04:37.198 } 00:04:37.198 } 00:04:37.198 ] 00:04:37.198 }, 00:04:37.198 { 00:04:37.198 "subsystem": "vhost_scsi", 00:04:37.198 "config": [] 00:04:37.198 } 00:04:37.198 ] 00:04:37.198 } 00:04:37.198 16:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:37.198 16:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2002887 00:04:37.198 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2002887 ']' 00:04:37.198 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2002887 00:04:37.198 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:37.198 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.198 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2002887 00:04:37.199 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.199 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.199 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2002887' 00:04:37.199 killing process with pid 2002887 00:04:37.199 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2002887 00:04:37.199 16:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2002887 00:04:37.456 16:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:37.456 16:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2003157 00:04:37.456 16:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:42.716 16:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2003157 00:04:42.716 16:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2003157 ']' 00:04:42.716 16:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2003157 00:04:42.716 16:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:42.716 16:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.716 16:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2003157 00:04:42.716 16:21:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.716 16:21:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.716 16:21:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2003157' 00:04:42.716 killing process with pid 2003157 00:04:42.716 16:21:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2003157 00:04:42.716 16:21:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2003157 00:04:42.975 
16:21:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:42.975 00:04:42.975 real 0m6.757s 00:04:42.975 user 0m6.562s 00:04:42.975 sys 0m0.632s 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.975 ************************************ 00:04:42.975 END TEST skip_rpc_with_json 00:04:42.975 ************************************ 00:04:42.975 16:21:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:42.975 16:21:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:42.975 16:21:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.975 16:21:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.975 16:21:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.975 ************************************ 00:04:42.975 START TEST skip_rpc_with_delay 00:04:42.975 ************************************ 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.975 [2024-07-15 16:21:22.410170] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
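The skip_rpc_with_json sequence that just finished creates a TCP transport, dumps the whole configuration with save_config, relaunches spdk_tgt from that JSON with --no-rpc-server, and greps the new log for 'TCP Transport Init' to prove the transport came back without any RPC traffic. A condensed sketch, with the config and log locations chosen only for illustration:

```bash
# Sketch of the save_config round-trip; CONFIG/LOG paths are assumptions.
CONFIG=/tmp/spdk_config.json
LOG=/tmp/spdk_json.log

# Against the running target: create some state, then dump the full config.
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py save_config > "$CONFIG"
# ...the test stops that first target here...

# Relaunch purely from the saved JSON, no RPC server this time.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG" &> "$LOG" &
pid=$!
sleep 5
grep -q 'TCP Transport Init' "$LOG"    # transport restored from the JSON alone
kill -SIGINT "$pid"
wait "$pid" || true
```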
00:04:42.975 [2024-07-15 16:21:22.410294] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:42.975 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.976 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:42.976 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.976 00:04:42.976 real 0m0.044s 00:04:42.976 user 0m0.022s 00:04:42.976 sys 0m0.022s 00:04:42.976 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.976 16:21:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:42.976 ************************************ 00:04:42.976 END TEST skip_rpc_with_delay 00:04:42.976 ************************************ 00:04:42.976 16:21:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:42.976 16:21:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:42.976 16:21:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:42.976 16:21:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:42.976 16:21:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.976 16:21:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.976 16:21:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.976 ************************************ 00:04:42.976 START TEST exit_on_failed_rpc_init 00:04:42.976 ************************************ 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2004272 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2004272 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2004272 ']' 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.976 16:21:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.976 [2024-07-15 16:21:22.525156] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
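The skip_rpc_with_delay failure above is intentional: --wait-for-rpc makes no sense when --no-rpc-server is given, and spdk_app_start rejects the combination, which is exactly the app.c error in the trace. The whole check reduces to an expected-failure invocation along these lines (binary path assumed):

```bash
# Expected failure: --wait-for-rpc without an RPC server must be rejected.
if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: target accepted --wait-for-rpc with --no-rpc-server" >&2
    exit 1
fi
```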
00:04:42.976 [2024-07-15 16:21:22.525232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004272 ] 00:04:42.976 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.234 [2024-07-15 16:21:22.593535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.234 [2024-07-15 16:21:22.671071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:43.801 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.801 [2024-07-15 16:21:23.370642] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:43.801 [2024-07-15 16:21:23.370728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004301 ] 00:04:44.060 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.060 [2024-07-15 16:21:23.440818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.060 [2024-07-15 16:21:23.512681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.060 [2024-07-15 16:21:23.512761] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:44.060 [2024-07-15 16:21:23.512774] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:44.060 [2024-07-15 16:21:23.512782] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2004272 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2004272 ']' 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2004272 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2004272 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2004272' 00:04:44.060 killing process with pid 2004272 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2004272 00:04:44.060 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2004272 00:04:44.626 00:04:44.626 real 0m1.423s 00:04:44.626 user 0m1.586s 00:04:44.626 sys 0m0.428s 00:04:44.626 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.626 16:21:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.626 ************************************ 00:04:44.626 END TEST exit_on_failed_rpc_init 00:04:44.626 ************************************ 00:04:44.626 16:21:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.626 16:21:23 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:44.626 00:04:44.626 real 0m14.012s 00:04:44.626 user 0m13.451s 00:04:44.626 sys 0m1.664s 00:04:44.626 16:21:23 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.626 16:21:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.626 ************************************ 00:04:44.626 END TEST skip_rpc 00:04:44.626 ************************************ 00:04:44.626 16:21:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.626 16:21:24 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:44.626 16:21:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.626 16:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.626 16:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:44.626 ************************************ 00:04:44.626 START TEST rpc_client 00:04:44.626 ************************************ 00:04:44.626 16:21:24 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:44.626 * Looking for test storage... 00:04:44.626 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:44.626 16:21:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:44.626 OK 00:04:44.626 16:21:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.626 00:04:44.626 real 0m0.129s 00:04:44.626 user 0m0.051s 00:04:44.626 sys 0m0.087s 00:04:44.626 16:21:24 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.626 16:21:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:44.626 ************************************ 00:04:44.626 END TEST rpc_client 00:04:44.626 ************************************ 00:04:44.885 16:21:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.885 16:21:24 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:44.885 16:21:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.885 16:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.885 16:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:44.885 ************************************ 00:04:44.885 START TEST json_config 00:04:44.885 ************************************ 00:04:44.885 16:21:24 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:44.885 16:21:24 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:44.885 16:21:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:44.885 16:21:24 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.885 16:21:24 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.885 16:21:24 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.885 16:21:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.885 16:21:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.885 16:21:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.885 16:21:24 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.885 16:21:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@47 -- # : 0 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
00:04:44.885 16:21:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:44.885 16:21:24 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:44.885 16:21:24 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:44.885 16:21:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.885 16:21:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.885 16:21:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.885 16:21:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.885 16:21:24 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:44.885 WARNING: No tests are enabled so not running JSON configuration tests 00:04:44.885 16:21:24 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:44.885 00:04:44.885 real 0m0.108s 00:04:44.885 user 0m0.054s 00:04:44.885 sys 0m0.055s 00:04:44.885 16:21:24 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.885 16:21:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.885 ************************************ 00:04:44.885 END TEST json_config 00:04:44.885 ************************************ 00:04:44.885 16:21:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.885 16:21:24 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:44.885 16:21:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.885 16:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.885 16:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:44.885 ************************************ 00:04:44.885 START TEST json_config_extra_key 00:04:44.885 ************************************ 00:04:44.885 16:21:24 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.144 16:21:24 json_config_extra_key -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:45.144 16:21:24 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.144 16:21:24 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.144 16:21:24 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.144 16:21:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.144 16:21:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.144 16:21:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.144 16:21:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:45.144 16:21:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:45.144 
16:21:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:45.144 16:21:24 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:45.144 INFO: launching applications... 00:04:45.144 16:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2004701 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.144 Waiting for target to run... 
00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2004701 /var/tmp/spdk_tgt.sock 00:04:45.144 16:21:24 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2004701 ']' 00:04:45.144 16:21:24 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.144 16:21:24 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:45.144 16:21:24 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.144 16:21:24 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.144 16:21:24 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.144 16:21:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.144 [2024-07-15 16:21:24.581474] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:45.144 [2024-07-15 16:21:24.581565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004701 ] 00:04:45.144 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.709 [2024-07-15 16:21:25.021302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.709 [2024-07-15 16:21:25.108609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.966 16:21:25 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.966 16:21:25 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:45.966 16:21:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:45.966 00:04:45.966 16:21:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:45.966 INFO: shutting down applications... 
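The json_config_extra_key steps above reduce to starting spdk_tgt on a private RPC socket with a fixed JSON config and then polling that socket until the target answers. Recreated by hand as a sketch (not the actual json_config/common.sh helpers; the polling loop below is a simplification of the test's waitforlisten, which also checks that the pid is still alive), the same launch-and-wait pattern looks roughly like:

  #!/usr/bin/env bash
  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  SOCK=/var/tmp/spdk_tgt.sock

  # Start the target with 1 core and 1024 MiB, as json_config/common.sh does above.
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
      --json "$SPDK_DIR/test/json_config/extra_key.json" &
  tgt_pid=$!

  # Poll the RPC socket until the app responds (simplified waitforlisten).
  for _ in $(seq 1 30); do
      if "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
          echo "target is up (pid $tgt_pid)"
          break
      fi
      sleep 0.5
  done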
00:04:45.966 16:21:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.966 16:21:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:45.966 16:21:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.966 16:21:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2004701 ]] 00:04:45.966 16:21:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2004701 00:04:45.966 16:21:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.966 16:21:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.966 16:21:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2004701 00:04:45.966 16:21:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.534 16:21:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.534 16:21:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.534 16:21:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2004701 00:04:46.534 16:21:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.534 16:21:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:46.534 16:21:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.534 16:21:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.534 SPDK target shutdown done 00:04:46.534 16:21:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:46.534 Success 00:04:46.534 00:04:46.534 real 0m1.454s 00:04:46.534 user 0m1.035s 00:04:46.534 sys 0m0.545s 00:04:46.534 16:21:25 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.534 16:21:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.534 ************************************ 00:04:46.534 END TEST json_config_extra_key 00:04:46.534 ************************************ 00:04:46.534 16:21:25 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.534 16:21:25 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.534 16:21:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.534 16:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.534 16:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:46.534 ************************************ 00:04:46.534 START TEST alias_rpc 00:04:46.534 ************************************ 00:04:46.534 16:21:25 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.534 * Looking for test storage... 
00:04:46.534 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:46.534 16:21:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:46.534 16:21:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2005010 00:04:46.534 16:21:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.534 16:21:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2005010 00:04:46.534 16:21:26 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2005010 ']' 00:04:46.534 16:21:26 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.534 16:21:26 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.534 16:21:26 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.534 16:21:26 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.534 16:21:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.534 [2024-07-15 16:21:26.110520] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:46.534 [2024-07-15 16:21:26.110592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005010 ] 00:04:46.794 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.794 [2024-07-15 16:21:26.180564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.794 [2024-07-15 16:21:26.256820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.362 16:21:26 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.362 16:21:26 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:47.362 16:21:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:47.621 16:21:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2005010 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2005010 ']' 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2005010 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2005010 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2005010' 00:04:47.621 killing process with pid 2005010 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@967 -- # kill 2005010 00:04:47.621 16:21:27 alias_rpc -- common/autotest_common.sh@972 -- # wait 2005010 00:04:47.880 00:04:47.880 real 0m1.484s 00:04:47.880 user 0m1.575s 00:04:47.880 sys 0m0.441s 00:04:47.880 16:21:27 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.880 16:21:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 
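The alias_rpc run above replays a JSON configuration into the freshly started target through rpc.py and relies on its ERR trap to kill the target if any call fails. A rough hand-run equivalent, assuming a target is already listening on the default /var/tmp/spdk.sock and using rpc.py's save_config helper with a scratch file name picked only for this sketch (config.json), would be:

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  # Capture the running target's current configuration...
  "$SPDK_DIR/scripts/rpc.py" save_config > config.json

  # ...and feed it back with the same -i flag the test passes to load_config.
  "$SPDK_DIR/scripts/rpc.py" load_config -i < config.json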
00:04:47.880 ************************************ 00:04:47.880 END TEST alias_rpc 00:04:47.880 ************************************ 00:04:48.141 16:21:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.141 16:21:27 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:48.141 16:21:27 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:48.141 16:21:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.141 16:21:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.141 16:21:27 -- common/autotest_common.sh@10 -- # set +x 00:04:48.141 ************************************ 00:04:48.141 START TEST spdkcli_tcp 00:04:48.141 ************************************ 00:04:48.141 16:21:27 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:48.141 * Looking for test storage... 00:04:48.141 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:48.141 16:21:27 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:48.141 16:21:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2005335 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2005335 00:04:48.141 16:21:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:48.141 16:21:27 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2005335 ']' 00:04:48.141 16:21:27 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.141 16:21:27 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.141 16:21:27 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.141 16:21:27 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.141 16:21:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.141 [2024-07-15 16:21:27.689578] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
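The spdkcli_tcp test starting above checks that the target's RPC service is reachable over TCP: further down in the log a socat process bridges TCP port 9998 to the UNIX-domain RPC socket, and rpc.py then queries rpc_get_methods through that bridge. Reproduced by hand against a target on /var/tmp/spdk.sock, the bridge amounts to:

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  # Expose the UNIX RPC socket on 127.0.0.1:9998 (same socat invocation as the test).
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Query the method list over TCP: 100 retries, 2 s timeout, as in the log below.
  "$SPDK_DIR/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  # Without a fork option socat exits after the single connection; clean up if it is still there.
  kill "$socat_pid" 2>/dev/null || true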
00:04:48.141 [2024-07-15 16:21:27.689649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005335 ] 00:04:48.141 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.399 [2024-07-15 16:21:27.756389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.399 [2024-07-15 16:21:27.834799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.399 [2024-07-15 16:21:27.834802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.655 16:21:28 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.655 16:21:28 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:48.655 16:21:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2005406 00:04:48.655 16:21:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.655 16:21:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.655 [ 00:04:48.655 "spdk_get_version", 00:04:48.655 "rpc_get_methods", 00:04:48.655 "trace_get_info", 00:04:48.655 "trace_get_tpoint_group_mask", 00:04:48.655 "trace_disable_tpoint_group", 00:04:48.655 "trace_enable_tpoint_group", 00:04:48.655 "trace_clear_tpoint_mask", 00:04:48.655 "trace_set_tpoint_mask", 00:04:48.655 "vfu_tgt_set_base_path", 00:04:48.655 "framework_get_pci_devices", 00:04:48.655 "framework_get_config", 00:04:48.655 "framework_get_subsystems", 00:04:48.655 "keyring_get_keys", 00:04:48.655 "iobuf_get_stats", 00:04:48.655 "iobuf_set_options", 00:04:48.655 "sock_get_default_impl", 00:04:48.655 "sock_set_default_impl", 00:04:48.655 "sock_impl_set_options", 00:04:48.655 "sock_impl_get_options", 00:04:48.655 "vmd_rescan", 00:04:48.655 "vmd_remove_device", 00:04:48.655 "vmd_enable", 00:04:48.655 "accel_get_stats", 00:04:48.655 "accel_set_options", 00:04:48.655 "accel_set_driver", 00:04:48.655 "accel_crypto_key_destroy", 00:04:48.655 "accel_crypto_keys_get", 00:04:48.655 "accel_crypto_key_create", 00:04:48.655 "accel_assign_opc", 00:04:48.655 "accel_get_module_info", 00:04:48.655 "accel_get_opc_assignments", 00:04:48.655 "notify_get_notifications", 00:04:48.655 "notify_get_types", 00:04:48.655 "bdev_get_histogram", 00:04:48.655 "bdev_enable_histogram", 00:04:48.655 "bdev_set_qos_limit", 00:04:48.656 "bdev_set_qd_sampling_period", 00:04:48.656 "bdev_get_bdevs", 00:04:48.656 "bdev_reset_iostat", 00:04:48.656 "bdev_get_iostat", 00:04:48.656 "bdev_examine", 00:04:48.656 "bdev_wait_for_examine", 00:04:48.656 "bdev_set_options", 00:04:48.656 "scsi_get_devices", 00:04:48.656 "thread_set_cpumask", 00:04:48.656 "framework_get_governor", 00:04:48.656 "framework_get_scheduler", 00:04:48.656 "framework_set_scheduler", 00:04:48.656 "framework_get_reactors", 00:04:48.656 "thread_get_io_channels", 00:04:48.656 "thread_get_pollers", 00:04:48.656 "thread_get_stats", 00:04:48.656 "framework_monitor_context_switch", 00:04:48.656 "spdk_kill_instance", 00:04:48.656 "log_enable_timestamps", 00:04:48.656 "log_get_flags", 00:04:48.656 "log_clear_flag", 00:04:48.656 "log_set_flag", 00:04:48.656 "log_get_level", 00:04:48.656 "log_set_level", 00:04:48.656 "log_get_print_level", 00:04:48.656 "log_set_print_level", 00:04:48.656 "framework_enable_cpumask_locks", 00:04:48.656 "framework_disable_cpumask_locks", 
00:04:48.656 "framework_wait_init", 00:04:48.656 "framework_start_init", 00:04:48.656 "virtio_blk_create_transport", 00:04:48.656 "virtio_blk_get_transports", 00:04:48.656 "vhost_controller_set_coalescing", 00:04:48.656 "vhost_get_controllers", 00:04:48.656 "vhost_delete_controller", 00:04:48.656 "vhost_create_blk_controller", 00:04:48.656 "vhost_scsi_controller_remove_target", 00:04:48.656 "vhost_scsi_controller_add_target", 00:04:48.656 "vhost_start_scsi_controller", 00:04:48.656 "vhost_create_scsi_controller", 00:04:48.656 "ublk_recover_disk", 00:04:48.656 "ublk_get_disks", 00:04:48.656 "ublk_stop_disk", 00:04:48.656 "ublk_start_disk", 00:04:48.656 "ublk_destroy_target", 00:04:48.656 "ublk_create_target", 00:04:48.656 "nbd_get_disks", 00:04:48.656 "nbd_stop_disk", 00:04:48.656 "nbd_start_disk", 00:04:48.656 "env_dpdk_get_mem_stats", 00:04:48.656 "nvmf_stop_mdns_prr", 00:04:48.656 "nvmf_publish_mdns_prr", 00:04:48.656 "nvmf_subsystem_get_listeners", 00:04:48.656 "nvmf_subsystem_get_qpairs", 00:04:48.656 "nvmf_subsystem_get_controllers", 00:04:48.656 "nvmf_get_stats", 00:04:48.656 "nvmf_get_transports", 00:04:48.656 "nvmf_create_transport", 00:04:48.656 "nvmf_get_targets", 00:04:48.656 "nvmf_delete_target", 00:04:48.656 "nvmf_create_target", 00:04:48.656 "nvmf_subsystem_allow_any_host", 00:04:48.656 "nvmf_subsystem_remove_host", 00:04:48.656 "nvmf_subsystem_add_host", 00:04:48.656 "nvmf_ns_remove_host", 00:04:48.656 "nvmf_ns_add_host", 00:04:48.656 "nvmf_subsystem_remove_ns", 00:04:48.656 "nvmf_subsystem_add_ns", 00:04:48.656 "nvmf_subsystem_listener_set_ana_state", 00:04:48.656 "nvmf_discovery_get_referrals", 00:04:48.656 "nvmf_discovery_remove_referral", 00:04:48.656 "nvmf_discovery_add_referral", 00:04:48.656 "nvmf_subsystem_remove_listener", 00:04:48.656 "nvmf_subsystem_add_listener", 00:04:48.656 "nvmf_delete_subsystem", 00:04:48.656 "nvmf_create_subsystem", 00:04:48.656 "nvmf_get_subsystems", 00:04:48.656 "nvmf_set_crdt", 00:04:48.656 "nvmf_set_config", 00:04:48.656 "nvmf_set_max_subsystems", 00:04:48.656 "iscsi_get_histogram", 00:04:48.656 "iscsi_enable_histogram", 00:04:48.656 "iscsi_set_options", 00:04:48.656 "iscsi_get_auth_groups", 00:04:48.656 "iscsi_auth_group_remove_secret", 00:04:48.656 "iscsi_auth_group_add_secret", 00:04:48.656 "iscsi_delete_auth_group", 00:04:48.656 "iscsi_create_auth_group", 00:04:48.656 "iscsi_set_discovery_auth", 00:04:48.656 "iscsi_get_options", 00:04:48.656 "iscsi_target_node_request_logout", 00:04:48.656 "iscsi_target_node_set_redirect", 00:04:48.656 "iscsi_target_node_set_auth", 00:04:48.656 "iscsi_target_node_add_lun", 00:04:48.656 "iscsi_get_stats", 00:04:48.656 "iscsi_get_connections", 00:04:48.656 "iscsi_portal_group_set_auth", 00:04:48.656 "iscsi_start_portal_group", 00:04:48.656 "iscsi_delete_portal_group", 00:04:48.656 "iscsi_create_portal_group", 00:04:48.656 "iscsi_get_portal_groups", 00:04:48.656 "iscsi_delete_target_node", 00:04:48.656 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.656 "iscsi_target_node_add_pg_ig_maps", 00:04:48.656 "iscsi_create_target_node", 00:04:48.656 "iscsi_get_target_nodes", 00:04:48.656 "iscsi_delete_initiator_group", 00:04:48.656 "iscsi_initiator_group_remove_initiators", 00:04:48.656 "iscsi_initiator_group_add_initiators", 00:04:48.656 "iscsi_create_initiator_group", 00:04:48.656 "iscsi_get_initiator_groups", 00:04:48.656 "keyring_linux_set_options", 00:04:48.656 "keyring_file_remove_key", 00:04:48.656 "keyring_file_add_key", 00:04:48.656 "vfu_virtio_create_scsi_endpoint", 00:04:48.656 
"vfu_virtio_scsi_remove_target", 00:04:48.656 "vfu_virtio_scsi_add_target", 00:04:48.656 "vfu_virtio_create_blk_endpoint", 00:04:48.656 "vfu_virtio_delete_endpoint", 00:04:48.656 "iaa_scan_accel_module", 00:04:48.656 "dsa_scan_accel_module", 00:04:48.656 "ioat_scan_accel_module", 00:04:48.656 "accel_error_inject_error", 00:04:48.656 "bdev_iscsi_delete", 00:04:48.656 "bdev_iscsi_create", 00:04:48.656 "bdev_iscsi_set_options", 00:04:48.656 "bdev_virtio_attach_controller", 00:04:48.656 "bdev_virtio_scsi_get_devices", 00:04:48.656 "bdev_virtio_detach_controller", 00:04:48.656 "bdev_virtio_blk_set_hotplug", 00:04:48.656 "bdev_ftl_set_property", 00:04:48.656 "bdev_ftl_get_properties", 00:04:48.656 "bdev_ftl_get_stats", 00:04:48.656 "bdev_ftl_unmap", 00:04:48.656 "bdev_ftl_unload", 00:04:48.656 "bdev_ftl_delete", 00:04:48.656 "bdev_ftl_load", 00:04:48.656 "bdev_ftl_create", 00:04:48.656 "bdev_aio_delete", 00:04:48.656 "bdev_aio_rescan", 00:04:48.656 "bdev_aio_create", 00:04:48.656 "blobfs_create", 00:04:48.656 "blobfs_detect", 00:04:48.656 "blobfs_set_cache_size", 00:04:48.656 "bdev_zone_block_delete", 00:04:48.656 "bdev_zone_block_create", 00:04:48.656 "bdev_delay_delete", 00:04:48.656 "bdev_delay_create", 00:04:48.656 "bdev_delay_update_latency", 00:04:48.656 "bdev_split_delete", 00:04:48.656 "bdev_split_create", 00:04:48.656 "bdev_error_inject_error", 00:04:48.656 "bdev_error_delete", 00:04:48.656 "bdev_error_create", 00:04:48.656 "bdev_raid_set_options", 00:04:48.656 "bdev_raid_remove_base_bdev", 00:04:48.656 "bdev_raid_add_base_bdev", 00:04:48.656 "bdev_raid_delete", 00:04:48.656 "bdev_raid_create", 00:04:48.656 "bdev_raid_get_bdevs", 00:04:48.656 "bdev_lvol_set_parent_bdev", 00:04:48.656 "bdev_lvol_set_parent", 00:04:48.656 "bdev_lvol_check_shallow_copy", 00:04:48.656 "bdev_lvol_start_shallow_copy", 00:04:48.656 "bdev_lvol_grow_lvstore", 00:04:48.656 "bdev_lvol_get_lvols", 00:04:48.656 "bdev_lvol_get_lvstores", 00:04:48.656 "bdev_lvol_delete", 00:04:48.656 "bdev_lvol_set_read_only", 00:04:48.656 "bdev_lvol_resize", 00:04:48.656 "bdev_lvol_decouple_parent", 00:04:48.656 "bdev_lvol_inflate", 00:04:48.656 "bdev_lvol_rename", 00:04:48.656 "bdev_lvol_clone_bdev", 00:04:48.656 "bdev_lvol_clone", 00:04:48.656 "bdev_lvol_snapshot", 00:04:48.656 "bdev_lvol_create", 00:04:48.656 "bdev_lvol_delete_lvstore", 00:04:48.656 "bdev_lvol_rename_lvstore", 00:04:48.656 "bdev_lvol_create_lvstore", 00:04:48.656 "bdev_passthru_delete", 00:04:48.656 "bdev_passthru_create", 00:04:48.656 "bdev_nvme_cuse_unregister", 00:04:48.656 "bdev_nvme_cuse_register", 00:04:48.656 "bdev_opal_new_user", 00:04:48.656 "bdev_opal_set_lock_state", 00:04:48.656 "bdev_opal_delete", 00:04:48.656 "bdev_opal_get_info", 00:04:48.656 "bdev_opal_create", 00:04:48.656 "bdev_nvme_opal_revert", 00:04:48.656 "bdev_nvme_opal_init", 00:04:48.656 "bdev_nvme_send_cmd", 00:04:48.656 "bdev_nvme_get_path_iostat", 00:04:48.656 "bdev_nvme_get_mdns_discovery_info", 00:04:48.656 "bdev_nvme_stop_mdns_discovery", 00:04:48.656 "bdev_nvme_start_mdns_discovery", 00:04:48.656 "bdev_nvme_set_multipath_policy", 00:04:48.656 "bdev_nvme_set_preferred_path", 00:04:48.656 "bdev_nvme_get_io_paths", 00:04:48.656 "bdev_nvme_remove_error_injection", 00:04:48.656 "bdev_nvme_add_error_injection", 00:04:48.656 "bdev_nvme_get_discovery_info", 00:04:48.656 "bdev_nvme_stop_discovery", 00:04:48.656 "bdev_nvme_start_discovery", 00:04:48.656 "bdev_nvme_get_controller_health_info", 00:04:48.656 "bdev_nvme_disable_controller", 00:04:48.656 "bdev_nvme_enable_controller", 00:04:48.656 
"bdev_nvme_reset_controller", 00:04:48.656 "bdev_nvme_get_transport_statistics", 00:04:48.656 "bdev_nvme_apply_firmware", 00:04:48.656 "bdev_nvme_detach_controller", 00:04:48.656 "bdev_nvme_get_controllers", 00:04:48.656 "bdev_nvme_attach_controller", 00:04:48.656 "bdev_nvme_set_hotplug", 00:04:48.656 "bdev_nvme_set_options", 00:04:48.656 "bdev_null_resize", 00:04:48.656 "bdev_null_delete", 00:04:48.656 "bdev_null_create", 00:04:48.656 "bdev_malloc_delete", 00:04:48.656 "bdev_malloc_create" 00:04:48.656 ] 00:04:48.656 16:21:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.657 16:21:28 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:48.657 16:21:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.914 16:21:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.914 16:21:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2005335 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2005335 ']' 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2005335 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2005335 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2005335' 00:04:48.914 killing process with pid 2005335 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2005335 00:04:48.914 16:21:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2005335 00:04:49.172 00:04:49.172 real 0m1.070s 00:04:49.172 user 0m1.791s 00:04:49.172 sys 0m0.454s 00:04:49.172 16:21:28 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.172 16:21:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.172 ************************************ 00:04:49.172 END TEST spdkcli_tcp 00:04:49.172 ************************************ 00:04:49.172 16:21:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.172 16:21:28 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.172 16:21:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.172 16:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.172 16:21:28 -- common/autotest_common.sh@10 -- # set +x 00:04:49.172 ************************************ 00:04:49.172 START TEST dpdk_mem_utility 00:04:49.172 ************************************ 00:04:49.172 16:21:28 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.430 * Looking for test storage... 
00:04:49.430 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:04:49.430 16:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.430 16:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2005663 00:04:49.430 16:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2005663 00:04:49.430 16:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.430 16:21:28 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2005663 ']' 00:04:49.430 16:21:28 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.430 16:21:28 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.430 16:21:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.430 16:21:28 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.430 16:21:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.430 [2024-07-15 16:21:28.810459] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:49.430 [2024-07-15 16:21:28.810525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005663 ] 00:04:49.430 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.430 [2024-07-15 16:21:28.879423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.430 [2024-07-15 16:21:28.954137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.364 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.364 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:50.364 16:21:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:50.364 16:21:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:50.364 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.364 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.364 { 00:04:50.364 "filename": "/tmp/spdk_mem_dump.txt" 00:04:50.364 } 00:04:50.364 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.364 16:21:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:50.364 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:50.364 1 heaps totaling size 814.000000 MiB 00:04:50.364 size: 814.000000 MiB heap id: 0 00:04:50.364 end heaps---------- 00:04:50.364 8 mempools totaling size 598.116089 MiB 00:04:50.364 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:50.364 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:50.364 size: 84.521057 MiB name: bdev_io_2005663 00:04:50.364 size: 51.011292 MiB name: evtpool_2005663 
00:04:50.364 size: 50.003479 MiB name: msgpool_2005663 00:04:50.364 size: 21.763794 MiB name: PDU_Pool 00:04:50.364 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:50.364 size: 0.026123 MiB name: Session_Pool 00:04:50.364 end mempools------- 00:04:50.364 6 memzones totaling size 4.142822 MiB 00:04:50.364 size: 1.000366 MiB name: RG_ring_0_2005663 00:04:50.364 size: 1.000366 MiB name: RG_ring_1_2005663 00:04:50.364 size: 1.000366 MiB name: RG_ring_4_2005663 00:04:50.364 size: 1.000366 MiB name: RG_ring_5_2005663 00:04:50.364 size: 0.125366 MiB name: RG_ring_2_2005663 00:04:50.364 size: 0.015991 MiB name: RG_ring_3_2005663 00:04:50.364 end memzones------- 00:04:50.364 16:21:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:50.364 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:50.364 list of free elements. size: 12.519348 MiB 00:04:50.364 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:50.364 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:50.364 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:50.364 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:50.364 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:50.364 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:50.364 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:50.364 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:50.364 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:50.364 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:50.364 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:50.364 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:50.364 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:50.364 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:50.364 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:50.364 list of standard malloc elements. 
size: 199.218079 MiB 00:04:50.364 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:50.364 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:50.364 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:50.364 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:50.364 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:50.364 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:50.364 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:50.364 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:50.364 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:50.364 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:50.364 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:50.364 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:50.364 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:50.364 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:50.364 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:50.364 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:50.364 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:50.364 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:50.364 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:50.364 list of memzone associated elements. 
size: 602.262573 MiB 00:04:50.364 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:50.364 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:50.364 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:50.364 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:50.364 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:50.364 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2005663_0 00:04:50.364 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:50.364 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2005663_0 00:04:50.364 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:50.364 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2005663_0 00:04:50.364 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:50.364 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:50.364 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:50.364 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:50.364 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:50.364 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2005663 00:04:50.364 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:50.364 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2005663 00:04:50.364 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:50.364 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2005663 00:04:50.364 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:50.364 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:50.364 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:50.364 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:50.364 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:50.364 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:50.364 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:50.364 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:50.364 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:50.364 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2005663 00:04:50.364 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:50.364 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2005663 00:04:50.364 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:50.364 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2005663 00:04:50.364 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:50.364 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2005663 00:04:50.364 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:50.364 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2005663 00:04:50.364 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:50.365 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:50.365 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:50.365 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:50.365 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:50.365 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:50.365 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:50.365 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2005663 00:04:50.365 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:50.365 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:50.365 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:50.365 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:50.365 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:50.365 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2005663 00:04:50.365 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:50.365 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:50.365 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:50.365 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2005663 00:04:50.365 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:50.365 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2005663 00:04:50.365 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:50.365 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:50.365 16:21:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:50.365 16:21:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2005663 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2005663 ']' 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2005663 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2005663 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2005663' 00:04:50.365 killing process with pid 2005663 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2005663 00:04:50.365 16:21:29 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2005663 00:04:50.623 00:04:50.623 real 0m1.430s 00:04:50.623 user 0m1.474s 00:04:50.623 sys 0m0.440s 00:04:50.623 16:21:30 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.623 16:21:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.623 ************************************ 00:04:50.623 END TEST dpdk_mem_utility 00:04:50.623 ************************************ 00:04:50.623 16:21:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.623 16:21:30 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:50.623 16:21:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.623 16:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.623 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:04:50.623 ************************************ 00:04:50.623 START TEST event 00:04:50.623 ************************************ 00:04:50.623 16:21:30 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:50.881 * Looking for test storage... 
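The dpdk_mem_utility listing above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write a memory dump (the /tmp/spdk_mem_dump.txt filename in the reply), and dpdk_mem_info.py renders it, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element view of heap 0. Against a running target the same report can be regenerated with:

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  # Ask the target to dump its DPDK allocation state to /tmp/spdk_mem_dump.txt.
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

  # Summary of heaps, mempools and memzones...
  "$SPDK_DIR/scripts/dpdk_mem_info.py"

  # ...and the detailed element list for heap 0, as shown above.
  "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0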
00:04:50.881 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:04:50.881 16:21:30 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:50.881 16:21:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:50.881 16:21:30 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.881 16:21:30 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:50.881 16:21:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.881 16:21:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.881 ************************************ 00:04:50.881 START TEST event_perf 00:04:50.881 ************************************ 00:04:50.881 16:21:30 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.881 Running I/O for 1 seconds...[2024-07-15 16:21:30.357973] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:50.881 [2024-07-15 16:21:30.358061] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005994 ] 00:04:50.881 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.881 [2024-07-15 16:21:30.429837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.139 [2024-07-15 16:21:30.506621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.139 [2024-07-15 16:21:30.506642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.139 [2024-07-15 16:21:30.506729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.139 [2024-07-15 16:21:30.506731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.072 Running I/O for 1 seconds... 00:04:52.072 lcore 0: 194403 00:04:52.072 lcore 1: 194403 00:04:52.072 lcore 2: 194403 00:04:52.072 lcore 3: 194404 00:04:52.072 done. 00:04:52.072 00:04:52.072 real 0m1.231s 00:04:52.072 user 0m4.135s 00:04:52.072 sys 0m0.093s 00:04:52.072 16:21:31 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.072 16:21:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.072 ************************************ 00:04:52.072 END TEST event_perf 00:04:52.072 ************************************ 00:04:52.072 16:21:31 event -- common/autotest_common.sh@1142 -- # return 0 00:04:52.072 16:21:31 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:52.072 16:21:31 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:52.072 16:21:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.072 16:21:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.072 ************************************ 00:04:52.072 START TEST event_reactor 00:04:52.072 ************************************ 00:04:52.072 16:21:31 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:52.072 [2024-07-15 16:21:31.661785] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:52.072 [2024-07-15 16:21:31.661876] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006279 ] 00:04:52.330 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.330 [2024-07-15 16:21:31.750461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.330 [2024-07-15 16:21:31.820358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.307 test_start 00:04:53.307 oneshot 00:04:53.307 tick 100 00:04:53.307 tick 100 00:04:53.307 tick 250 00:04:53.307 tick 100 00:04:53.307 tick 100 00:04:53.307 tick 100 00:04:53.307 tick 250 00:04:53.307 tick 500 00:04:53.307 tick 100 00:04:53.307 tick 100 00:04:53.307 tick 250 00:04:53.307 tick 100 00:04:53.307 tick 100 00:04:53.307 test_end 00:04:53.307 00:04:53.307 real 0m1.241s 00:04:53.307 user 0m1.133s 00:04:53.307 sys 0m0.105s 00:04:53.307 16:21:32 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.307 16:21:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:53.307 ************************************ 00:04:53.307 END TEST event_reactor 00:04:53.307 ************************************ 00:04:53.565 16:21:32 event -- common/autotest_common.sh@1142 -- # return 0 00:04:53.565 16:21:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.565 16:21:32 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:53.565 16:21:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.565 16:21:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.565 ************************************ 00:04:53.565 START TEST event_reactor_perf 00:04:53.565 ************************************ 00:04:53.565 16:21:32 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.565 [2024-07-15 16:21:32.982164] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:53.565 [2024-07-15 16:21:32.982246] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006537 ] 00:04:53.565 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.565 [2024-07-15 16:21:33.054270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.565 [2024-07-15 16:21:33.125559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.939 test_start 00:04:54.939 test_end 00:04:54.939 Performance: 980503 events per second 00:04:54.939 00:04:54.939 real 0m1.226s 00:04:54.939 user 0m1.130s 00:04:54.939 sys 0m0.092s 00:04:54.939 16:21:34 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.939 16:21:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.939 ************************************ 00:04:54.939 END TEST event_reactor_perf 00:04:54.939 ************************************ 00:04:54.939 16:21:34 event -- common/autotest_common.sh@1142 -- # return 0 00:04:54.939 16:21:34 event -- event/event.sh@49 -- # uname -s 00:04:54.939 16:21:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:54.939 16:21:34 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.939 16:21:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.939 16:21:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.939 16:21:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.939 ************************************ 00:04:54.939 START TEST event_scheduler 00:04:54.939 ************************************ 00:04:54.939 16:21:34 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.939 * Looking for test storage... 00:04:54.939 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:04:54.939 16:21:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:54.939 16:21:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2006777 00:04:54.939 16:21:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.939 16:21:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:54.939 16:21:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2006777 00:04:54.939 16:21:34 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2006777 ']' 00:04:54.939 16:21:34 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.939 16:21:34 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.939 16:21:34 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:54.939 16:21:34 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.939 16:21:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.939 [2024-07-15 16:21:34.400359] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:54.939 [2024-07-15 16:21:34.400432] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006777 ] 00:04:54.939 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.939 [2024-07-15 16:21:34.468279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.197 [2024-07-15 16:21:34.550666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.198 [2024-07-15 16:21:34.550761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.198 [2024-07-15 16:21:34.550854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.198 [2024-07-15 16:21:34.550855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:55.763 16:21:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.763 [2024-07-15 16:21:35.241273] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:55.763 [2024-07-15 16:21:35.241295] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:55.763 [2024-07-15 16:21:35.241306] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:55.763 [2024-07-15 16:21:35.241314] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:55.763 [2024-07-15 16:21:35.241321] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.763 16:21:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.763 [2024-07-15 16:21:35.311340] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
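The scheduler app above is launched with --wait-for-rpc, which is why initialization pauses until the test switches it to the dynamic scheduler and then explicitly resumes startup: the dpdk_governor error and the load/core/busy limit notices come from the framework_set_scheduler call, and the reactors only begin scheduling after framework_start_init. Driven by hand against any SPDK app started with --wait-for-rpc, the sequence reduces to these two RPCs:

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  # Select the dynamic scheduler while the app is still waiting for RPCs...
  "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic

  # ...then let initialization continue; subsystems and reactors start after this.
  "$SPDK_DIR/scripts/rpc.py" framework_start_init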
00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.763 16:21:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.763 16:21:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.763 ************************************ 00:04:55.763 START TEST scheduler_create_thread 00:04:55.763 ************************************ 00:04:55.763 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:55.763 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:55.763 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.763 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 2 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 3 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 4 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 5 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 6 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 7 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 8 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 9 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 10 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.021 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.587 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.588 16:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:56.588 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.588 16:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.013 16:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.013 16:21:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:58.013 16:21:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:58.013 16:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.013 16:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.945 16:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.945 00:04:58.945 real 0m3.102s 00:04:58.945 user 0m0.023s 00:04:58.945 sys 0m0.008s 00:04:58.945 16:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.945 16:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.945 ************************************ 00:04:58.945 END TEST scheduler_create_thread 00:04:58.945 ************************************ 00:04:58.945 16:21:38 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:58.945 16:21:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:58.945 16:21:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2006777 00:04:58.945 16:21:38 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2006777 ']' 00:04:58.945 16:21:38 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2006777 00:04:58.945 16:21:38 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:58.945 16:21:38 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.945 16:21:38 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2006777 00:04:59.203 16:21:38 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:59.203 16:21:38 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:59.203 16:21:38 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2006777' 00:04:59.203 killing process with pid 2006777 00:04:59.203 16:21:38 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2006777 00:04:59.203 16:21:38 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2006777 00:04:59.461 [2024-07-15 16:21:38.834344] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
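The scheduler_create_thread subtest above is driven entirely through a test-only RPC plugin. A condensed sketch of the calls, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py and that the scheduler test has already put scheduler_plugin on the RPC plugin path (both are assumptions about the harness; the RPC names, masks and percentages are taken verbatim from the trace):

rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100    # one busy thread pinned to each core (masks 0x1..0x8)
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0        # and one idle thread per core
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30         # unpinned, 30% active
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)  # id 11 in this run
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active $thread_id 50                 # raise it to 50% active
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)    # id 12 in this run
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete $thread_id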
00:04:59.461 00:04:59.461 real 0m4.764s 00:04:59.461 user 0m9.281s 00:04:59.461 sys 0m0.432s 00:04:59.461 16:21:39 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.461 16:21:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.461 ************************************ 00:04:59.461 END TEST event_scheduler 00:04:59.461 ************************************ 00:04:59.720 16:21:39 event -- common/autotest_common.sh@1142 -- # return 0 00:04:59.720 16:21:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:59.720 16:21:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:59.720 16:21:39 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.720 16:21:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.720 16:21:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.720 ************************************ 00:04:59.720 START TEST app_repeat 00:04:59.720 ************************************ 00:04:59.720 16:21:39 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2007701 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2007701' 00:04:59.720 Process app_repeat pid: 2007701 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:59.720 spdk_app_start Round 0 00:04:59.720 16:21:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2007701 /var/tmp/spdk-nbd.sock 00:04:59.720 16:21:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2007701 ']' 00:04:59.720 16:21:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.720 16:21:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.720 16:21:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.720 16:21:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.720 16:21:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.720 [2024-07-15 16:21:39.145800] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
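The remainder of this section exercises app_repeat, a small event-framework app that the harness starts once and then kills and restarts for each round. A sketch of the launch matching the trace below, where $rootdir stands for the SPDK checkout and waitforlisten/killprocess come from test/common/autotest_common.sh:

$rootdir/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!                                            # 2007701 in this run
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock         # block until the RPC socket accepts connections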
00:04:59.720 [2024-07-15 16:21:39.145883] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007701 ] 00:04:59.720 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.720 [2024-07-15 16:21:39.217049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.720 [2024-07-15 16:21:39.295026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.720 [2024-07-15 16:21:39.295029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.655 16:21:39 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.655 16:21:39 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:00.655 16:21:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.655 Malloc0 00:05:00.655 16:21:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.913 Malloc1 00:05:00.913 16:21:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.913 16:21:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.172 /dev/nbd0 00:05:01.172 16:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.172 16:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:01.172 16:21:40 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.172 1+0 records in 00:05:01.172 1+0 records out 00:05:01.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269685 s, 15.2 MB/s 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:01.172 16:21:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:01.172 16:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.172 16:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.172 16:21:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.172 /dev/nbd1 00:05:01.172 16:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.432 16:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.432 1+0 records in 00:05:01.432 1+0 records out 00:05:01.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271637 s, 15.1 MB/s 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:01.432 16:21:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:01.432 16:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.432 
16:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.432 16:21:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.432 16:21:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.432 16:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.432 16:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.432 { 00:05:01.432 "nbd_device": "/dev/nbd0", 00:05:01.432 "bdev_name": "Malloc0" 00:05:01.432 }, 00:05:01.432 { 00:05:01.432 "nbd_device": "/dev/nbd1", 00:05:01.432 "bdev_name": "Malloc1" 00:05:01.432 } 00:05:01.432 ]' 00:05:01.432 16:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.432 16:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.432 { 00:05:01.432 "nbd_device": "/dev/nbd0", 00:05:01.432 "bdev_name": "Malloc0" 00:05:01.432 }, 00:05:01.432 { 00:05:01.432 "nbd_device": "/dev/nbd1", 00:05:01.432 "bdev_name": "Malloc1" 00:05:01.432 } 00:05:01.432 ]' 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.432 /dev/nbd1' 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.432 /dev/nbd1' 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.432 16:21:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.691 256+0 records in 00:05:01.691 256+0 records out 00:05:01.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108798 s, 96.4 MB/s 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.691 256+0 records in 00:05:01.691 256+0 records out 00:05:01.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204068 s, 51.4 MB/s 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.691 256+0 records in 00:05:01.691 256+0 records out 
00:05:01.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221981 s, 47.2 MB/s 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.691 16:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.950 16:21:41 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.950 16:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.209 16:21:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.209 16:21:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.467 16:21:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.726 [2024-07-15 16:21:42.087814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.726 [2024-07-15 16:21:42.154427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.726 [2024-07-15 16:21:42.154429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.726 [2024-07-15 16:21:42.193565] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.726 [2024-07-15 16:21:42.193608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.010 16:21:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.010 16:21:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:06.010 spdk_app_start Round 1 00:05:06.010 16:21:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2007701 /var/tmp/spdk-nbd.sock 00:05:06.010 16:21:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2007701 ']' 00:05:06.010 16:21:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.010 16:21:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.010 16:21:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
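Every round follows the same write-and-verify cycle against two malloc bdevs exported over NBD; Round 0 above shows it in full. A condensed sketch, with the long rpc.py and temp-file paths shortened for readability (commands and sizes are otherwise taken from the trace):

./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc0 (64 MiB, 4 KiB blocks)
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc1
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                      # 1 MiB of random data
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct            # write it through each NBD node
dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0                                       # read back and compare
cmp -b -n 1M nbdrandtest /dev/nbd1
rm nbdrandtest
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM    # end the round; the harness sleeps 3s and starts the next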
00:05:06.010 16:21:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.010 16:21:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.010 16:21:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.010 16:21:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:06.010 16:21:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.010 Malloc0 00:05:06.010 16:21:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.010 Malloc1 00:05:06.010 16:21:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.010 16:21:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.010 /dev/nbd0 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.269 1+0 records in 00:05:06.269 1+0 records out 00:05:06.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249814 s, 16.4 MB/s 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.269 /dev/nbd1 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.269 1+0 records in 00:05:06.269 1+0 records out 00:05:06.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186956 s, 21.9 MB/s 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:06.269 16:21:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.269 16:21:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.528 { 00:05:06.528 "nbd_device": "/dev/nbd0", 00:05:06.528 "bdev_name": "Malloc0" 00:05:06.528 }, 00:05:06.528 { 00:05:06.528 "nbd_device": "/dev/nbd1", 00:05:06.528 "bdev_name": "Malloc1" 00:05:06.528 } 00:05:06.528 ]' 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.528 { 00:05:06.528 "nbd_device": "/dev/nbd0", 00:05:06.528 "bdev_name": "Malloc0" 00:05:06.528 }, 00:05:06.528 { 00:05:06.528 "nbd_device": "/dev/nbd1", 00:05:06.528 "bdev_name": "Malloc1" 00:05:06.528 } 00:05:06.528 ]' 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.528 /dev/nbd1' 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.528 /dev/nbd1' 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.528 256+0 records in 00:05:06.528 256+0 records out 00:05:06.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113815 s, 92.1 MB/s 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.528 256+0 records in 00:05:06.528 256+0 records out 00:05:06.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202364 s, 51.8 MB/s 00:05:06.528 16:21:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.787 256+0 records in 00:05:06.787 256+0 records out 00:05:06.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217746 s, 48.2 MB/s 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.787 16:21:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.046 16:21:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.305 16:21:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.305 16:21:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.563 16:21:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.563 [2024-07-15 16:21:47.140485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.821 [2024-07-15 16:21:47.208090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.821 [2024-07-15 16:21:47.208093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.821 [2024-07-15 16:21:47.248315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.821 [2024-07-15 16:21:47.248359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.116 16:21:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.116 16:21:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:11.116 spdk_app_start Round 2 00:05:11.116 16:21:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2007701 /var/tmp/spdk-nbd.sock 00:05:11.116 16:21:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2007701 ']' 00:05:11.116 16:21:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.116 16:21:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.116 16:21:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
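Each round also sanity-checks how many NBD nodes the target reports before and after teardown: two while Malloc0/Malloc1 are exported, zero once nbd_stop_disk has run (the final grep matches nothing and the harness swallows its non-zero exit with true). A small sketch of that check as traced above:

./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
  | jq -r '.[] | .nbd_device' \
  | grep -c /dev/nbd                           # prints 2 while exported, 0 afterwards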
00:05:11.116 16:21:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.116 16:21:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.116 16:21:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.116 16:21:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:11.116 16:21:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.116 Malloc0 00:05:11.116 16:21:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.116 Malloc1 00:05:11.116 16:21:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.116 16:21:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.116 16:21:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.116 16:21:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.116 16:21:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.116 16:21:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.116 16:21:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.116 16:21:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.116 16:21:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.116 16:21:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.117 /dev/nbd0 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.117 1+0 records in 00:05:11.117 1+0 records out 00:05:11.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149138 s, 27.5 MB/s 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.117 16:21:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.117 16:21:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.375 /dev/nbd1 00:05:11.375 16:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.375 16:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.375 16:21:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:11.375 16:21:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:11.375 16:21:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.375 16:21:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.375 16:21:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:11.375 16:21:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:11.376 16:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.376 16:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.376 16:21:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.376 1+0 records in 00:05:11.376 1+0 records out 00:05:11.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261224 s, 15.7 MB/s 00:05:11.376 16:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.376 16:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:11.376 16:21:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.376 16:21:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.376 16:21:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:11.376 16:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.376 16:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.376 16:21:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.376 16:21:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.376 16:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.634 { 00:05:11.634 "nbd_device": "/dev/nbd0", 00:05:11.634 "bdev_name": "Malloc0" 00:05:11.634 }, 00:05:11.634 { 00:05:11.634 "nbd_device": "/dev/nbd1", 00:05:11.634 "bdev_name": "Malloc1" 00:05:11.634 } 00:05:11.634 ]' 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.634 { 00:05:11.634 "nbd_device": "/dev/nbd0", 00:05:11.634 "bdev_name": "Malloc0" 00:05:11.634 }, 00:05:11.634 { 00:05:11.634 "nbd_device": "/dev/nbd1", 00:05:11.634 "bdev_name": "Malloc1" 00:05:11.634 } 00:05:11.634 ]' 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.634 /dev/nbd1' 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.634 /dev/nbd1' 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.634 256+0 records in 00:05:11.634 256+0 records out 00:05:11.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103185 s, 102 MB/s 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.634 256+0 records in 00:05:11.634 256+0 records out 00:05:11.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206528 s, 50.8 MB/s 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.634 256+0 records in 00:05:11.634 256+0 records out 00:05:11.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216755 s, 48.4 MB/s 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.634 16:21:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.900 16:21:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.162 16:21:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.420 16:21:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.420 16:21:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.678 16:21:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.678 [2024-07-15 16:21:52.226572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.935 [2024-07-15 16:21:52.296102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.935 [2024-07-15 16:21:52.296105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.935 [2024-07-15 16:21:52.335662] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.935 [2024-07-15 16:21:52.335699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.464 16:21:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2007701 /var/tmp/spdk-nbd.sock 00:05:15.465 16:21:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2007701 ']' 00:05:15.465 16:21:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.465 16:21:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.465 16:21:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
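The write/verify cycle traced above via nbd_common.sh is just dd and cmp; a minimal stand-alone sketch of the same pattern, assuming two NBD devices are already exported and using an arbitrary scratch path rather than the harness's nbdrandtest file:

    # write 1 MiB of random data to a scratch file, copy it onto each NBD device,
    # then compare the first 1 MiB read back from every device against the scratch file
    tmp_file=/tmp/nbdrandtest            # hypothetical path, not the harness's own
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"  # non-zero exit means the data read back differs
    done
    rm "$tmp_file"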
00:05:15.465 16:21:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.465 16:21:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:15.724 16:21:55 event.app_repeat -- event/event.sh@39 -- # killprocess 2007701 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2007701 ']' 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2007701 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2007701 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2007701' 00:05:15.724 killing process with pid 2007701 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2007701 00:05:15.724 16:21:55 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2007701 00:05:15.984 spdk_app_start is called in Round 0. 00:05:15.984 Shutdown signal received, stop current app iteration 00:05:15.984 Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 reinitialization... 00:05:15.984 spdk_app_start is called in Round 1. 00:05:15.984 Shutdown signal received, stop current app iteration 00:05:15.984 Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 reinitialization... 00:05:15.984 spdk_app_start is called in Round 2. 00:05:15.984 Shutdown signal received, stop current app iteration 00:05:15.984 Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 reinitialization... 00:05:15.984 spdk_app_start is called in Round 3. 
00:05:15.984 Shutdown signal received, stop current app iteration 00:05:15.984 16:21:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:15.984 16:21:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:15.984 00:05:15.984 real 0m16.309s 00:05:15.984 user 0m34.549s 00:05:15.984 sys 0m3.111s 00:05:15.984 16:21:55 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.984 16:21:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.984 ************************************ 00:05:15.984 END TEST app_repeat 00:05:15.984 ************************************ 00:05:15.984 16:21:55 event -- common/autotest_common.sh@1142 -- # return 0 00:05:15.984 16:21:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:15.984 16:21:55 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.984 16:21:55 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.984 16:21:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.984 16:21:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.984 ************************************ 00:05:15.984 START TEST cpu_locks 00:05:15.984 ************************************ 00:05:15.984 16:21:55 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:16.243 * Looking for test storage... 00:05:16.243 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:16.243 16:21:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:16.243 16:21:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:16.243 16:21:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:16.243 16:21:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:16.243 16:21:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.243 16:21:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.243 16:21:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.243 ************************************ 00:05:16.243 START TEST default_locks 00:05:16.243 ************************************ 00:05:16.243 16:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:16.243 16:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2010683 00:05:16.243 16:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2010683 00:05:16.243 16:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.243 16:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2010683 ']' 00:05:16.243 16:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.243 16:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.243 16:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.243 16:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.243 16:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.243 [2024-07-15 16:21:55.691914] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:16.243 [2024-07-15 16:21:55.691983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2010683 ] 00:05:16.243 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.243 [2024-07-15 16:21:55.761401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.243 [2024-07-15 16:21:55.833932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.190 16:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.190 16:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:17.190 16:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2010683 00:05:17.190 16:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2010683 00:05:17.190 16:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.762 lslocks: write error 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2010683 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2010683 ']' 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2010683 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2010683 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2010683' 00:05:17.762 killing process with pid 2010683 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2010683 00:05:17.762 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2010683 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2010683 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2010683 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2010683 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2010683 ']' 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.020 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2010683) - No such process 00:05:18.020 ERROR: process (pid: 2010683) is no longer running 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.020 00:05:18.020 real 0m1.917s 00:05:18.020 user 0m2.006s 00:05:18.020 sys 0m0.732s 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.020 16:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.020 ************************************ 00:05:18.020 END TEST default_locks 00:05:18.020 ************************************ 00:05:18.279 16:21:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:18.279 16:21:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.279 16:21:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.279 16:21:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.279 16:21:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.279 ************************************ 00:05:18.279 START TEST default_locks_via_rpc 00:05:18.279 ************************************ 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2011189 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2011189 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 2011189 ']' 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.279 16:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.279 [2024-07-15 16:21:57.679966] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:18.279 [2024-07-15 16:21:57.680023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011189 ] 00:05:18.279 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.279 [2024-07-15 16:21:57.747456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.279 [2024-07-15 16:21:57.824983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.231 16:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2011189 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2011189 00:05:19.232 16:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
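The locks_exist probe seen in this trace reduces to asking the kernel which file locks the target pid holds; a minimal sketch of that check, assuming the lock files keep the /var/tmp/spdk_cpu_lock_* naming that shows up later in this run:

    # return 0 if the given spdk_tgt pid holds at least one CPU-core lock file
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 2011189 && echo "core locks held" || echo "no core locks held"   # pid taken from this run, for illustration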
00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2011189 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2011189 ']' 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2011189 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2011189 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2011189' 00:05:19.811 killing process with pid 2011189 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2011189 00:05:19.811 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2011189 00:05:20.070 00:05:20.070 real 0m1.906s 00:05:20.070 user 0m1.979s 00:05:20.070 sys 0m0.664s 00:05:20.070 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.070 16:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.070 ************************************ 00:05:20.070 END TEST default_locks_via_rpc 00:05:20.070 ************************************ 00:05:20.070 16:21:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:20.070 16:21:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:20.070 16:21:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.070 16:21:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.070 16:21:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.070 ************************************ 00:05:20.070 START TEST non_locking_app_on_locked_coremask 00:05:20.070 ************************************ 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2011488 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2011488 /var/tmp/spdk.sock 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2011488 ']' 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.070 16:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.070 [2024-07-15 16:21:59.642275] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:20.070 [2024-07-15 16:21:59.642316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011488 ] 00:05:20.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.328 [2024-07-15 16:21:59.708601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.328 [2024-07-15 16:21:59.785599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2011638 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2011638 /var/tmp/spdk2.sock 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2011638 ']' 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.895 16:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:20.895 [2024-07-15 16:22:00.480846] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:20.895 [2024-07-15 16:22:00.480912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011638 ] 00:05:21.165 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.165 [2024-07-15 16:22:00.569724] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
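The second target launched in this test is the point of it: it shares core mask 0x1 with the already-running instance but passes --disable-cpumask-locks, so it skips the core lock and comes up anyway. A minimal sketch of that launch sequence, assuming the build paths used in this workspace:

    SPDK_TGT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    # first instance claims core 0 and creates its spdk_cpu_lock file
    "$SPDK_TGT" -m 0x1 &
    # second instance reuses core 0 but opts out of the lock and listens on its own RPC socket
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # the harness follows each launch with waitforlisten on the matching RPC socket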
00:05:21.165 [2024-07-15 16:22:00.569746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.165 [2024-07-15 16:22:00.713629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.732 16:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.732 16:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:21.732 16:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2011488 00:05:21.732 16:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2011488 00:05:21.732 16:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.666 lslocks: write error 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2011488 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2011488 ']' 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2011488 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2011488 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2011488' 00:05:22.666 killing process with pid 2011488 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2011488 00:05:22.666 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2011488 00:05:23.232 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2011638 00:05:23.232 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2011638 ']' 00:05:23.232 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2011638 00:05:23.232 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.233 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.233 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2011638 00:05:23.492 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.492 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.492 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2011638' 00:05:23.492 
killing process with pid 2011638 00:05:23.492 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2011638 00:05:23.492 16:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2011638 00:05:23.751 00:05:23.751 real 0m3.510s 00:05:23.751 user 0m3.773s 00:05:23.751 sys 0m1.112s 00:05:23.751 16:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.751 16:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.751 ************************************ 00:05:23.751 END TEST non_locking_app_on_locked_coremask 00:05:23.751 ************************************ 00:05:23.751 16:22:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:23.751 16:22:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:23.751 16:22:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.751 16:22:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.751 16:22:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.751 ************************************ 00:05:23.751 START TEST locking_app_on_unlocked_coremask 00:05:23.751 ************************************ 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2012075 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2012075 /var/tmp/spdk.sock 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2012075 ']' 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.751 16:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:23.751 [2024-07-15 16:22:03.237801] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:23.751 [2024-07-15 16:22:03.237883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012075 ] 00:05:23.751 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.751 [2024-07-15 16:22:03.307926] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:23.751 [2024-07-15 16:22:03.307951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.011 [2024-07-15 16:22:03.385781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2012337 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2012337 /var/tmp/spdk2.sock 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2012337 ']' 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.575 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.575 [2024-07-15 16:22:04.078218] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:24.575 [2024-07-15 16:22:04.078282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012337 ] 00:05:24.575 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.575 [2024-07-15 16:22:04.167973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.833 [2024-07-15 16:22:04.315932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.400 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.400 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:25.400 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2012337 00:05:25.400 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.400 16:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2012337 00:05:26.778 lslocks: write error 00:05:26.778 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2012075 00:05:26.778 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2012075 ']' 00:05:26.778 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2012075 00:05:26.778 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:26.778 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.778 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2012075 00:05:26.778 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.778 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.778 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2012075' 00:05:26.778 killing process with pid 2012075 00:05:26.778 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2012075 00:05:26.778 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2012075 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2012337 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2012337 ']' 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2012337 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2012337 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2012337' 00:05:27.344 killing process with pid 2012337 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2012337 00:05:27.344 16:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2012337 00:05:27.602 00:05:27.602 real 0m3.790s 00:05:27.602 user 0m4.034s 00:05:27.602 sys 0m1.227s 00:05:27.602 16:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.602 16:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.602 ************************************ 00:05:27.602 END TEST locking_app_on_unlocked_coremask 00:05:27.602 ************************************ 00:05:27.602 16:22:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:27.602 16:22:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:27.602 16:22:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.602 16:22:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.603 16:22:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.603 ************************************ 00:05:27.603 START TEST locking_app_on_locked_coremask 00:05:27.603 ************************************ 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2012899 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2012899 /var/tmp/spdk.sock 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2012899 ']' 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.603 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.603 [2024-07-15 16:22:07.111437] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:27.603 [2024-07-15 16:22:07.111528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012899 ] 00:05:27.603 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.603 [2024-07-15 16:22:07.179541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.861 [2024-07-15 16:22:07.254872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2012933 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2012933 /var/tmp/spdk2.sock 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2012933 /var/tmp/spdk2.sock 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2012933 /var/tmp/spdk2.sock 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2012933 ']' 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.442 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.442 [2024-07-15 16:22:07.938702] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:28.442 [2024-07-15 16:22:07.938755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012933 ] 00:05:28.442 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.442 [2024-07-15 16:22:08.033218] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2012899 has claimed it. 00:05:28.442 [2024-07-15 16:22:08.033259] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.009 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2012933) - No such process 00:05:29.009 ERROR: process (pid: 2012933) is no longer running 00:05:29.009 16:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.009 16:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:29.009 16:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:29.009 16:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.009 16:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.009 16:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.009 16:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2012899 00:05:29.267 16:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2012899 00:05:29.267 16:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.834 lslocks: write error 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2012899 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2012899 ']' 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2012899 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2012899 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2012899' 00:05:29.834 killing process with pid 2012899 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2012899 00:05:29.834 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2012899 00:05:30.095 00:05:30.095 real 0m2.588s 00:05:30.095 user 0m2.795s 00:05:30.095 sys 0m0.831s 00:05:30.095 16:22:09 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.095 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.095 ************************************ 00:05:30.095 END TEST locking_app_on_locked_coremask 00:05:30.095 ************************************ 00:05:30.355 16:22:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:30.355 16:22:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:30.355 16:22:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.355 16:22:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.355 16:22:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.355 ************************************ 00:05:30.355 START TEST locking_overlapped_coremask 00:05:30.355 ************************************ 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2013387 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2013387 /var/tmp/spdk.sock 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2013387 ']' 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.355 16:22:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:30.355 [2024-07-15 16:22:09.771495] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
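Both the test that just ended and the overlapped-mask one starting here treat a failed second launch as the passing outcome: the core is already claimed, so spdk_app_start must refuse it. A rough stand-alone sketch of that expected-failure check, simplified to wrap the launch directly rather than waitforlisten as the harness does, with NOT written out locally as a stand-in for the harness helper of the same name:

    SPDK_TGT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    NOT() { ! "$@"; }   # succeeds only if the wrapped command fails
    # core 0 is already locked by a running instance, so a second locked launch must exit non-zero
    NOT "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock && echo "conflict detected, as expected"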
00:05:30.355 [2024-07-15 16:22:09.771552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013387 ] 00:05:30.355 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.355 [2024-07-15 16:22:09.839040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.355 [2024-07-15 16:22:09.918856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.355 [2024-07-15 16:22:09.918874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.355 [2024-07-15 16:22:09.918876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2013484 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2013484 /var/tmp/spdk2.sock 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2013484 /var/tmp/spdk2.sock 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2013484 /var/tmp/spdk2.sock 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2013484 ']' 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.292 16:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.292 [2024-07-15 16:22:10.615663] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:31.292 [2024-07-15 16:22:10.615753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013484 ] 00:05:31.293 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.293 [2024-07-15 16:22:10.710264] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2013387 has claimed it. 00:05:31.293 [2024-07-15 16:22:10.710299] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.862 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2013484) - No such process 00:05:31.862 ERROR: process (pid: 2013484) is no longer running 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2013387 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2013387 ']' 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2013387 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2013387 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2013387' 00:05:31.862 killing process with pid 2013387 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2013387 00:05:31.862 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2013387 00:05:32.121 00:05:32.121 real 0m1.877s 00:05:32.121 user 0m5.274s 00:05:32.121 sys 0m0.457s 00:05:32.121 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.121 16:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.121 ************************************ 00:05:32.121 END TEST locking_overlapped_coremask 00:05:32.121 ************************************ 00:05:32.121 16:22:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:32.121 16:22:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:32.121 16:22:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.122 16:22:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.122 16:22:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.122 ************************************ 00:05:32.122 START TEST locking_overlapped_coremask_via_rpc 00:05:32.122 ************************************ 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2013775 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2013775 /var/tmp/spdk.sock 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2013775 ']' 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.122 16:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.381 [2024-07-15 16:22:11.731894] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:32.381 [2024-07-15 16:22:11.731957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013775 ] 00:05:32.381 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.381 [2024-07-15 16:22:11.800888] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
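
The failure traced above is SPDK's CPU core locking at work: the first spdk_tgt was started with -m 0x7 (cores 0-2) and holds the lock files /var/tmp/spdk_cpu_lock_000 through _002, so the second instance, started with -m 0x1c (cores 2-4) on a separate RPC socket, cannot claim core 2 and exits. A rough manual reproduction, assuming an SPDK build tree as the working directory (binary path, masks and sockets taken from the trace above):

    # first instance claims cores 0-2 (mask 0x7) and creates /var/tmp/spdk_cpu_lock_000..002
    ./build/bin/spdk_tgt -m 0x7 &
    # second instance asks for cores 2-4 (mask 0x1c); core 2 is already locked, so it exits
    # with "Cannot create lock on core 2, probably process <pid> has claimed it."
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock

The locking_overlapped_coremask_via_rpc test starting next exercises the same overlap, but both targets are launched with --disable-cpumask-locks and the locks are only taken once they are re-enabled over RPC.
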
00:05:32.381 [2024-07-15 16:22:11.800915] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.381 [2024-07-15 16:22:11.869624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.381 [2024-07-15 16:22:11.869726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.381 [2024-07-15 16:22:11.869728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2013830 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2013830 /var/tmp/spdk2.sock 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2013830 ']' 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.320 16:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 [2024-07-15 16:22:12.575318] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:33.321 [2024-07-15 16:22:12.575404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013830 ] 00:05:33.321 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.321 [2024-07-15 16:22:12.672817] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.321 [2024-07-15 16:22:12.672845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.321 [2024-07-15 16:22:12.818239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.321 [2024-07-15 16:22:12.818358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.321 [2024-07-15 16:22:12.818358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.889 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.889 [2024-07-15 16:22:13.430520] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2013775 has claimed it. 
00:05:33.889 request: 00:05:33.889 { 00:05:33.889 "method": "framework_enable_cpumask_locks", 00:05:33.889 "req_id": 1 00:05:33.889 } 00:05:33.889 Got JSON-RPC error response 00:05:33.889 response: 00:05:33.889 { 00:05:33.890 "code": -32603, 00:05:33.890 "message": "Failed to claim CPU core: 2" 00:05:33.890 } 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2013775 /var/tmp/spdk.sock 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2013775 ']' 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.890 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.148 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.148 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.148 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2013830 /var/tmp/spdk2.sock 00:05:34.148 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2013830 ']' 00:05:34.148 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.148 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.148 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
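
The JSON-RPC exchange above is the point of the via_rpc variant: with --disable-cpumask-locks neither target takes its core locks at startup, so both come up despite the overlapping masks, and the conflict only appears when locking is switched back on per process. Mask 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the single shared core, which is exactly what the -32603 "Failed to claim CPU core: 2" response reports. A sketch of the same exchange driven by hand, assuming scripts/rpc.py in the SPDK tree exposes the method by name (the test itself goes through the rpc_cmd helper):

    # first target (default socket /var/tmp/spdk.sock) re-enables locking and claims cores 0-2
    ./scripts/rpc.py framework_enable_cpumask_locks
    # second target (cores 2-4) tries the same and should get the error above,
    # since core 2 is already locked by the first process
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
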
00:05:34.148 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.148 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.407 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.407 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.407 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:34.407 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.407 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.407 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.407 00:05:34.407 real 0m2.082s 00:05:34.407 user 0m0.804s 00:05:34.407 sys 0m0.206s 00:05:34.407 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.407 16:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.407 ************************************ 00:05:34.407 END TEST locking_overlapped_coremask_via_rpc 00:05:34.407 ************************************ 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:34.407 16:22:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:34.407 16:22:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2013775 ]] 00:05:34.407 16:22:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2013775 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2013775 ']' 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2013775 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2013775 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2013775' 00:05:34.407 killing process with pid 2013775 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2013775 00:05:34.407 16:22:13 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2013775 00:05:34.667 16:22:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2013830 ]] 00:05:34.667 16:22:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2013830 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2013830 ']' 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2013830 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2013830 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2013830' 00:05:34.667 killing process with pid 2013830 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2013830 00:05:34.667 16:22:14 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2013830 00:05:35.237 16:22:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.237 16:22:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:35.237 16:22:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2013775 ]] 00:05:35.237 16:22:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2013775 00:05:35.237 16:22:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2013775 ']' 00:05:35.237 16:22:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2013775 00:05:35.237 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2013775) - No such process 00:05:35.237 16:22:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2013775 is not found' 00:05:35.237 Process with pid 2013775 is not found 00:05:35.237 16:22:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2013830 ]] 00:05:35.237 16:22:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2013830 00:05:35.237 16:22:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2013830 ']' 00:05:35.237 16:22:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2013830 00:05:35.237 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2013830) - No such process 00:05:35.237 16:22:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2013830 is not found' 00:05:35.237 Process with pid 2013830 is not found 00:05:35.237 16:22:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.237 00:05:35.237 real 0m19.053s 00:05:35.237 user 0m31.210s 00:05:35.237 sys 0m6.264s 00:05:35.237 16:22:14 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.237 16:22:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.237 ************************************ 00:05:35.237 END TEST cpu_locks 00:05:35.237 ************************************ 00:05:35.237 16:22:14 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.237 00:05:35.237 real 0m44.415s 00:05:35.237 user 1m21.668s 00:05:35.237 sys 0m10.501s 00:05:35.237 16:22:14 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.237 16:22:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.237 ************************************ 00:05:35.237 END TEST event 00:05:35.237 ************************************ 00:05:35.237 16:22:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.237 16:22:14 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:35.237 16:22:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.237 16:22:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.237 
16:22:14 -- common/autotest_common.sh@10 -- # set +x 00:05:35.238 ************************************ 00:05:35.238 START TEST thread 00:05:35.238 ************************************ 00:05:35.238 16:22:14 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:35.238 * Looking for test storage... 00:05:35.238 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:35.238 16:22:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.238 16:22:14 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:35.238 16:22:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.238 16:22:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.238 ************************************ 00:05:35.238 START TEST thread_poller_perf 00:05:35.238 ************************************ 00:05:35.238 16:22:14 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.497 [2024-07-15 16:22:14.834273] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:35.498 [2024-07-15 16:22:14.834354] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014416 ] 00:05:35.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.498 [2024-07-15 16:22:14.905394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.498 [2024-07-15 16:22:14.976486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.498 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:36.875 ====================================== 00:05:36.875 busy:2504953344 (cyc) 00:05:36.875 total_run_count: 870000 00:05:36.875 tsc_hz: 2500000000 (cyc) 00:05:36.875 ====================================== 00:05:36.875 poller_cost: 2879 (cyc), 1151 (nsec) 00:05:36.875 00:05:36.875 real 0m1.226s 00:05:36.875 user 0m1.128s 00:05:36.875 sys 0m0.094s 00:05:36.875 16:22:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.875 16:22:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.875 ************************************ 00:05:36.875 END TEST thread_poller_perf 00:05:36.875 ************************************ 00:05:36.875 16:22:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:36.875 16:22:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.875 16:22:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:36.875 16:22:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.875 16:22:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.875 ************************************ 00:05:36.875 START TEST thread_poller_perf 00:05:36.875 ************************************ 00:05:36.875 16:22:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.875 [2024-07-15 16:22:16.138974] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:36.875 [2024-07-15 16:22:16.139053] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014699 ] 00:05:36.875 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.875 [2024-07-15 16:22:16.209655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.875 [2024-07-15 16:22:16.279777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.875 Running 1000 pollers for 1 seconds with 0 microseconds period. 
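
The poller_perf summary printed above for the first of the two runs (busy:2504953344 cycles over 870000 runs at a 2.5 GHz TSC) is internally consistent: poller_cost is simply the busy cycle count divided by the number of completed poller runs, converted to nanoseconds with the reported TSC rate. A quick sanity check using only shell arithmetic, with the numbers copied from the log:

    busy=2504953344      # busy cycles reported above
    runs=870000          # total_run_count
    tsc_hz=2500000000    # 2.5 GHz TSC
    echo "cycles per poller run: $(( busy / runs ))"                        # 2879 (cyc)
    echo "ns per poller run:     $(( busy * 1000000000 / runs / tsc_hz ))"  # 1151 (nsec)
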
00:05:37.811 ====================================== 00:05:37.811 busy:2501494388 (cyc) 00:05:37.811 total_run_count: 14572000 00:05:37.811 tsc_hz: 2500000000 (cyc) 00:05:37.811 ====================================== 00:05:37.811 poller_cost: 171 (cyc), 68 (nsec) 00:05:37.811 00:05:37.811 real 0m1.223s 00:05:37.811 user 0m1.127s 00:05:37.811 sys 0m0.093s 00:05:37.811 16:22:17 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.811 16:22:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.811 ************************************ 00:05:37.811 END TEST thread_poller_perf 00:05:37.811 ************************************ 00:05:37.811 16:22:17 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:37.811 16:22:17 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:37.811 16:22:17 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:37.811 16:22:17 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.811 16:22:17 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.811 16:22:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.069 ************************************ 00:05:38.069 START TEST thread_spdk_lock 00:05:38.069 ************************************ 00:05:38.069 16:22:17 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:38.069 [2024-07-15 16:22:17.435908] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:38.069 [2024-07-15 16:22:17.435988] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014898 ] 00:05:38.069 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.069 [2024-07-15 16:22:17.506178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.069 [2024-07-15 16:22:17.579465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.069 [2024-07-15 16:22:17.579471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.637 [2024-07-15 16:22:18.070538] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:38.637 [2024-07-15 16:22:18.070581] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:38.637 [2024-07-15 16:22:18.070591] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14cdec0 00:05:38.637 [2024-07-15 16:22:18.071453] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:38.637 [2024-07-15 16:22:18.071558] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:38.637 [2024-07-15 16:22:18.071581] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:38.637 Starting test contend 00:05:38.637 Worker Delay Wait us Hold us Total us 00:05:38.637 0 3 178083 185928 364012 00:05:38.637 1 5 94250 286677 380928 00:05:38.637 PASS test contend 00:05:38.637 Starting test hold_by_poller 00:05:38.637 PASS test hold_by_poller 00:05:38.637 Starting test hold_by_message 00:05:38.637 PASS test hold_by_message 00:05:38.637 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:38.637 100014 assertions passed 00:05:38.637 0 assertions failed 00:05:38.637 00:05:38.637 real 0m0.716s 00:05:38.637 user 0m1.118s 00:05:38.637 sys 0m0.087s 00:05:38.637 16:22:18 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.637 16:22:18 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:38.637 ************************************ 00:05:38.637 END TEST thread_spdk_lock 00:05:38.637 ************************************ 00:05:38.637 16:22:18 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:38.637 00:05:38.637 real 0m3.497s 00:05:38.637 user 0m3.503s 00:05:38.637 sys 0m0.501s 00:05:38.637 16:22:18 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.637 16:22:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.637 ************************************ 00:05:38.637 END TEST thread 00:05:38.637 ************************************ 00:05:38.637 16:22:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.637 16:22:18 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:38.637 16:22:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.637 16:22:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.637 16:22:18 -- common/autotest_common.sh@10 -- # set +x 00:05:38.922 ************************************ 00:05:38.922 START TEST accel 00:05:38.922 ************************************ 00:05:38.922 16:22:18 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:38.922 * Looking for test storage... 00:05:38.922 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:38.922 16:22:18 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:38.922 16:22:18 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:38.922 16:22:18 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.922 16:22:18 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2015058 00:05:38.922 16:22:18 accel -- accel/accel.sh@63 -- # waitforlisten 2015058 00:05:38.922 16:22:18 accel -- common/autotest_common.sh@829 -- # '[' -z 2015058 ']' 00:05:38.922 16:22:18 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.922 16:22:18 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:38.922 16:22:18 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.922 16:22:18 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:38.922 16:22:18 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:38.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.922 16:22:18 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.922 16:22:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.922 16:22:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.922 16:22:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.922 16:22:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.922 16:22:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.922 16:22:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.922 16:22:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:38.922 16:22:18 accel -- accel/accel.sh@41 -- # jq -r . 00:05:38.922 [2024-07-15 16:22:18.394438] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:38.922 [2024-07-15 16:22:18.394508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015058 ] 00:05:38.922 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.922 [2024-07-15 16:22:18.462569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.181 [2024-07-15 16:22:18.535989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@862 -- # return 0 00:05:39.749 16:22:19 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:39.749 16:22:19 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:39.749 16:22:19 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:39.749 16:22:19 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:39.749 16:22:19 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:39.749 16:22:19 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:39.749 16:22:19 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 
16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.749 16:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.749 16:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.749 16:22:19 accel -- accel/accel.sh@75 -- # killprocess 2015058 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@948 -- # '[' -z 2015058 ']' 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@952 -- # kill -0 2015058 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@953 -- # uname 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2015058 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2015058' 00:05:39.749 killing process with pid 2015058 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@967 -- # kill 2015058 00:05:39.749 16:22:19 accel -- common/autotest_common.sh@972 -- # wait 2015058 00:05:40.317 16:22:19 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:40.317 16:22:19 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:40.317 16:22:19 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:40.317 16:22:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.317 16:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.317 16:22:19 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:40.317 16:22:19 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:40.317 16:22:19 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:40.317 16:22:19 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.317 16:22:19 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.317 16:22:19 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.317 16:22:19 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.317 16:22:19 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.317 16:22:19 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:40.317 16:22:19 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:40.317 16:22:19 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.317 16:22:19 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:40.317 16:22:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.317 16:22:19 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:40.317 16:22:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:40.317 16:22:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.317 16:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.317 ************************************ 00:05:40.317 START TEST accel_missing_filename 00:05:40.317 ************************************ 00:05:40.317 16:22:19 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:40.317 16:22:19 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:40.317 16:22:19 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:40.317 16:22:19 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:40.317 16:22:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.317 16:22:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:40.317 16:22:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.317 16:22:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:40.317 16:22:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:40.317 16:22:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:40.317 16:22:19 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.317 16:22:19 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.317 16:22:19 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.318 16:22:19 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.318 16:22:19 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.318 16:22:19 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:40.318 16:22:19 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:40.318 [2024-07-15 16:22:19.798865] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:40.318 [2024-07-15 16:22:19.798972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015362 ] 00:05:40.318 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.318 [2024-07-15 16:22:19.869817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.576 [2024-07-15 16:22:19.946016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.576 [2024-07-15 16:22:19.985957] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.576 [2024-07-15 16:22:20.046907] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:40.576 A filename is required. 
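
"A filename is required." is the expected outcome of this negative test: the compress workload reads its input from a file, and the run above deliberately omits it. The positive form passes the uncompressed input with -l, as the accel_compress_verify test that follows does with the bib file shipped in the tree; a sketch assuming an SPDK checkout as the working directory:

    # compress the bundled test input for 1 second; -y is omitted here, since the next
    # test shows that compression rejects the verify option
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib
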
00:05:40.576 16:22:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:40.576 16:22:20 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.576 16:22:20 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:40.576 16:22:20 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:40.576 16:22:20 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:40.576 16:22:20 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.576 00:05:40.576 real 0m0.342s 00:05:40.576 user 0m0.249s 00:05:40.576 sys 0m0.131s 00:05:40.576 16:22:20 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.576 16:22:20 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:40.576 ************************************ 00:05:40.576 END TEST accel_missing_filename 00:05:40.576 ************************************ 00:05:40.576 16:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.576 16:22:20 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:40.576 16:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:40.576 16:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.576 16:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.835 ************************************ 00:05:40.835 START TEST accel_compress_verify 00:05:40.835 ************************************ 00:05:40.835 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:40.835 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:40.835 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:40.835 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:40.835 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.835 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:40.835 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.835 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:40.835 16:22:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:40.835 16:22:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:40.835 16:22:20 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.835 16:22:20 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.835 16:22:20 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.835 16:22:20 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.835 
16:22:20 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.835 16:22:20 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:40.835 16:22:20 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:40.835 [2024-07-15 16:22:20.222408] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:40.835 [2024-07-15 16:22:20.222525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015397 ] 00:05:40.835 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.835 [2024-07-15 16:22:20.293635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.835 [2024-07-15 16:22:20.364890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.835 [2024-07-15 16:22:20.404501] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.094 [2024-07-15 16:22:20.464272] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:41.094 00:05:41.094 Compression does not support the verify option, aborting. 00:05:41.094 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:41.094 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.094 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:41.094 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:41.094 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:41.094 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.094 00:05:41.094 real 0m0.335s 00:05:41.094 user 0m0.245s 00:05:41.094 sys 0m0.125s 00:05:41.094 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.094 16:22:20 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:41.094 ************************************ 00:05:41.094 END TEST accel_compress_verify 00:05:41.094 ************************************ 00:05:41.094 16:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.094 16:22:20 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:41.094 16:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:41.094 16:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.094 16:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.094 ************************************ 00:05:41.094 START TEST accel_wrong_workload 00:05:41.094 ************************************ 00:05:41.094 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:41.094 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:41.094 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:41.094 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:41.094 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.094 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:41.094 16:22:20 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.094 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:41.094 16:22:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:41.094 16:22:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:41.094 16:22:20 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.094 16:22:20 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.094 16:22:20 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.094 16:22:20 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.094 16:22:20 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.094 16:22:20 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:41.094 16:22:20 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:41.094 Unsupported workload type: foobar 00:05:41.094 [2024-07-15 16:22:20.634538] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:41.094 accel_perf options: 00:05:41.094 [-h help message] 00:05:41.094 [-q queue depth per core] 00:05:41.095 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.095 [-T number of threads per core 00:05:41.095 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.095 [-t time in seconds] 00:05:41.095 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.095 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:41.095 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.095 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.095 [-S for crc32c workload, use this seed value (default 0) 00:05:41.095 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.095 [-f for fill workload, use this BYTE value (default 255) 00:05:41.095 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.095 [-y verify result if this switch is on] 00:05:41.095 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.095 Can be used to spread operations across a wider range of memory. 
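
The usage text above is printed because foobar is not one of the workload names listed for -w; only those names are accepted. For contrast, a valid invocation built from the same option set, matching what the accel_crc32c test further down runs (shown here only as a usage sketch):

    # run crc32c for 1 second with seed value 32, verifying each result (-y)
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
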
00:05:41.095 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:41.095 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.095 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.095 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.095 00:05:41.095 real 0m0.028s 00:05:41.095 user 0m0.012s 00:05:41.095 sys 0m0.016s 00:05:41.095 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.095 16:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:41.095 ************************************ 00:05:41.095 END TEST accel_wrong_workload 00:05:41.095 ************************************ 00:05:41.095 Error: writing output failed: Broken pipe 00:05:41.095 16:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.095 16:22:20 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.095 16:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:41.095 16:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.095 16:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.413 ************************************ 00:05:41.413 START TEST accel_negative_buffers 00:05:41.413 ************************************ 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:41.413 16:22:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:41.413 16:22:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:41.413 16:22:20 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.413 16:22:20 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.413 16:22:20 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.413 16:22:20 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.413 16:22:20 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.413 16:22:20 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:41.413 16:22:20 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:41.413 -x option must be non-negative. 
00:05:41.413 [2024-07-15 16:22:20.741840] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:41.413 accel_perf options: 00:05:41.413 [-h help message] 00:05:41.413 [-q queue depth per core] 00:05:41.413 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.413 [-T number of threads per core 00:05:41.413 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.413 [-t time in seconds] 00:05:41.413 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.413 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:41.413 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.413 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.413 [-S for crc32c workload, use this seed value (default 0) 00:05:41.413 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.413 [-f for fill workload, use this BYTE value (default 255) 00:05:41.413 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.413 [-y verify result if this switch is on] 00:05:41.413 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.413 Can be used to spread operations across a wider range of memory. 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.413 00:05:41.413 real 0m0.030s 00:05:41.413 user 0m0.011s 00:05:41.413 sys 0m0.019s 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.413 16:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:41.413 ************************************ 00:05:41.413 END TEST accel_negative_buffers 00:05:41.413 ************************************ 00:05:41.413 Error: writing output failed: Broken pipe 00:05:41.413 16:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.413 16:22:20 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:41.413 16:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:41.413 16:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.413 16:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.413 ************************************ 00:05:41.413 START TEST accel_crc32c 00:05:41.413 ************************************ 00:05:41.413 16:22:20 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:41.413 16:22:20 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:41.413 [2024-07-15 16:22:20.856671] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:41.413 [2024-07-15 16:22:20.856761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015697 ] 00:05:41.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.413 [2024-07-15 16:22:20.929360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.695 [2024-07-15 16:22:21.010582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.695 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.696 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.696 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.696 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.696 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.696 16:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.696 16:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.696 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.696 16:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.633 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:42.634 16:22:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.634 00:05:42.634 real 0m1.353s 00:05:42.634 user 0m1.230s 00:05:42.634 sys 0m0.140s 00:05:42.634 16:22:22 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.634 16:22:22 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:42.634 ************************************ 00:05:42.634 END TEST accel_crc32c 00:05:42.634 ************************************ 00:05:42.893 16:22:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.893 16:22:22 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:42.893 16:22:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:42.893 16:22:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.893 16:22:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.893 ************************************ 00:05:42.893 START TEST accel_crc32c_C2 00:05:42.893 ************************************ 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:42.893 16:22:22 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.893 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:42.893 [2024-07-15 16:22:22.290698] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:42.893 [2024-07-15 16:22:22.290780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015985 ] 00:05:42.893 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.893 [2024-07-15 16:22:22.362040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.893 [2024-07-15 16:22:22.432714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.894 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:43.153 16:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.091 00:05:44.091 real 0m1.340s 00:05:44.091 user 0m1.222s 00:05:44.091 sys 0m0.134s 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.091 16:22:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:44.091 ************************************ 00:05:44.091 END TEST accel_crc32c_C2 00:05:44.091 ************************************ 00:05:44.091 16:22:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.091 16:22:23 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:44.091 16:22:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.091 16:22:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.091 16:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.351 ************************************ 00:05:44.351 START TEST accel_copy 00:05:44.351 ************************************ 00:05:44.351 16:22:23 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
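Each test prologue traced here (accel_json_cfg=(), three [[ 0 -gt 0 ]] checks, [[ -n '' ]], local IFS=,, jq -r .) is build_accel_config assembling an optional JSON accel configuration for accel_perf; on this run nothing is enabled, the array stays empty, and every workload lands on the software module. A sketch of that shape, assuming hypothetical SPDK_TEST_ACCEL_* flag names and RPC method strings, neither of which appears in this log:

build_accel_config() {
    accel_json_cfg=()
    # The trace shows three feature checks of the form [[ 0 -gt 0 ]]; the flag
    # and method names below are stand-ins, not taken from this log.
    if [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]]; then
        accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
    fi
    if [[ ${SPDK_TEST_ACCEL_IAA:-0} -gt 0 ]]; then
        accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
    fi
    if [[ -n ${accel_json_cfg[*]:-} ]]; then
        local IFS=,
        # Joined with commas and pretty-printed, then handed to accel_perf via -c /dev/fd/62.
        echo "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}" | jq -r .
    fi
}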
00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:44.351 [2024-07-15 16:22:23.709626] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:44.351 [2024-07-15 16:22:23.709712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016214 ] 00:05:44.351 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.351 [2024-07-15 16:22:23.779532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.351 [2024-07-15 16:22:23.850570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.351 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.352 16:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.730 16:22:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.730 16:22:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 
16:22:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:45.731 16:22:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.731 00:05:45.731 real 0m1.337s 00:05:45.731 user 0m1.217s 00:05:45.731 sys 0m0.132s 00:05:45.731 16:22:25 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.731 16:22:25 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:45.731 ************************************ 00:05:45.731 END TEST accel_copy 00:05:45.731 ************************************ 00:05:45.731 16:22:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.731 16:22:25 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.731 16:22:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:45.731 16:22:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.731 16:22:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.731 ************************************ 00:05:45.731 START TEST accel_fill 00:05:45.731 ************************************ 00:05:45.731 16:22:25 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:45.731 [2024-07-15 16:22:25.126146] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:45.731 [2024-07-15 16:22:25.126228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016441 ] 00:05:45.731 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.731 [2024-07-15 16:22:25.196057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.731 [2024-07-15 16:22:25.268230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
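The accel_fill command line traced above maps directly onto the option help captured earlier: -w fill is the opcode, -f 128 shows up as the 0x80 fill byte in this config dump, and the two 64 values that follow are the -q queue depth and -a task count. Reproducing it by hand against the binary the trace points at; the -c /dev/fd/62 argument only matters when build_accel_config emitted something, which it did not on this run:

/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill 4 KiB buffers with 0x80 for 1 s and verify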
00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.731 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.990 16:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.928 16:22:26 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:46.928 16:22:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.928 00:05:46.928 real 0m1.339s 00:05:46.928 user 0m1.225s 00:05:46.928 sys 0m0.128s 00:05:46.928 16:22:26 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.928 16:22:26 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:46.928 ************************************ 00:05:46.928 END TEST accel_fill 00:05:46.928 ************************************ 00:05:46.928 16:22:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.928 16:22:26 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:46.928 16:22:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:46.928 16:22:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.928 16:22:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.188 ************************************ 00:05:47.188 START TEST accel_copy_crc32c 00:05:47.188 ************************************ 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:47.188 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:47.188 [2024-07-15 16:22:26.548253] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:47.188 [2024-07-15 16:22:26.548333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016661 ] 00:05:47.188 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.188 [2024-07-15 16:22:26.620501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.189 [2024-07-15 16:22:26.691919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.189 
16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.189 16:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.568 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.569 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:48.569 16:22:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.569 00:05:48.569 real 0m1.341s 00:05:48.569 user 0m1.222s 00:05:48.569 sys 0m0.135s 00:05:48.569 16:22:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.569 16:22:27 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:48.569 ************************************ 00:05:48.569 END TEST accel_copy_crc32c 00:05:48.569 ************************************ 00:05:48.569 16:22:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.569 16:22:27 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.569 16:22:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:48.569 16:22:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.569 16:22:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.569 ************************************ 00:05:48.569 START TEST accel_copy_crc32c_C2 00:05:48.569 ************************************ 00:05:48.569 16:22:27 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.569 16:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:48.569 [2024-07-15 16:22:27.966794] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:48.569 [2024-07-15 16:22:27.966875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016888 ] 00:05:48.569 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.569 [2024-07-15 16:22:28.037454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.569 [2024-07-15 16:22:28.108635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
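This test repeats copy_crc32c with -C 2, which per the option help captured earlier sets the io vector size; that is why the config entries just below show an 8192-byte buffer alongside the 4096-byte one, where the plain copy_crc32c run above used 4096 bytes for both. The traced invocation, runnable directly against the same binary:

/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w copy_crc32c -y -C 2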
00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.569 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.829 16:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.767 00:05:49.767 real 0m1.339s 00:05:49.767 user 0m1.220s 00:05:49.767 sys 0m0.134s 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.767 16:22:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:49.767 ************************************ 00:05:49.767 END TEST accel_copy_crc32c_C2 00:05:49.767 ************************************ 00:05:49.767 16:22:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.767 16:22:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:49.767 16:22:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:49.767 16:22:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.767 16:22:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.027 ************************************ 00:05:50.027 START TEST accel_dualcast 00:05:50.027 ************************************ 00:05:50.027 16:22:29 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:50.027 [2024-07-15 16:22:29.388700] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
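The accel_copy_crc32c_C2 run that finishes above executed on the software module (accel_module=software). As a rough illustration of what that workload exercises -- not SPDK's implementation, and with invented helper names and illustrative segment sizes -- the sketch below copies two source segments into one destination while accumulating a CRC-32C over the copied data.

/* Minimal sketch (not SPDK code): copy source segments into a destination
 * while accumulating a CRC-32C. CRC-32C uses the reflected Castagnoli
 * polynomial 0x82F63B78, seed 0xFFFFFFFF and a final bit inversion.
 * Segment sizes are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c_update(uint32_t crc, const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc;
}

int main(void)
{
    static uint8_t src1[2048], src2[2048], dst[4096];
    memset(src1, 0xA5, sizeof(src1));
    memset(src2, 0x5A, sizeof(src2));

    uint32_t crc = 0xFFFFFFFFu;                       /* seed */
    memcpy(dst, src1, sizeof(src1));                  /* copy segment 1 */
    crc = crc32c_update(crc, src1, sizeof(src1));
    memcpy(dst + sizeof(src1), src2, sizeof(src2));   /* copy segment 2 */
    crc = crc32c_update(crc, src2, sizeof(src2));
    crc ^= 0xFFFFFFFFu;                               /* final inversion */

    printf("crc32c = 0x%08x\n", crc);
    return 0;
}

The two-segment split only loosely mirrors the '-C 2' argument of the test command; the exact meaning of that flag is taken from the command line in the log, not from accel_perf documentation.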
00:05:50.027 [2024-07-15 16:22:29.388791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017159 ] 00:05:50.027 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.027 [2024-07-15 16:22:29.459776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.027 [2024-07-15 16:22:29.533006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.027 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.028 16:22:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:51.407 16:22:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.407 00:05:51.407 real 0m1.344s 00:05:51.407 user 0m1.226s 00:05:51.407 sys 0m0.132s 00:05:51.407 16:22:30 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.407 16:22:30 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:51.407 ************************************ 00:05:51.407 END TEST accel_dualcast 00:05:51.407 ************************************ 00:05:51.407 16:22:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.407 16:22:30 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:51.407 16:22:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:51.407 16:22:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.407 16:22:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.407 ************************************ 00:05:51.407 START TEST accel_compare 00:05:51.407 ************************************ 00:05:51.407 16:22:30 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:51.407 [2024-07-15 16:22:30.811086] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
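The accel_dualcast test ending above targets the dualcast opcode: one source buffer written to two separate destinations in a single operation. A minimal software sketch (not the SPDK implementation; the helper name is invented, and the only size hint is the '4096 bytes' value in the log) follows.

/* Minimal sketch (not SPDK code): "dualcast" copies one source buffer into
 * two destination buffers. Hardware engines may do this in one descriptor;
 * a software fallback is simply two copies. */
#include <stddef.h>
#include <string.h>

static void dualcast_sw(void *dst1, void *dst2, const void *src, size_t len)
{
    memcpy(dst1, src, len);   /* first destination */
    memcpy(dst2, src, len);   /* second destination */
}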
00:05:51.407 [2024-07-15 16:22:30.811168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017451 ] 00:05:51.407 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.407 [2024-07-15 16:22:30.880323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.407 [2024-07-15 16:22:30.950974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 16:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:51.408 16:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.667 16:22:30 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.667 16:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.606 
16:22:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:52.606 16:22:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.606 00:05:52.606 real 0m1.338s 00:05:52.606 user 0m1.232s 00:05:52.606 sys 0m0.120s 00:05:52.606 16:22:32 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.606 16:22:32 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:52.606 ************************************ 00:05:52.606 END TEST accel_compare 00:05:52.606 ************************************ 00:05:52.606 16:22:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.606 16:22:32 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:52.606 16:22:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:52.606 16:22:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.606 16:22:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.866 ************************************ 00:05:52.866 START TEST accel_xor 00:05:52.866 ************************************ 00:05:52.866 16:22:32 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:52.866 [2024-07-15 16:22:32.230589] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
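The accel_compare run above exercises the compare opcode on the software module; the '-y' flag used throughout these runs appears to enable result verification in accel_perf (an assumption based on the command lines, not on documentation). Functionally the software path reduces to a byte-wise comparison of two equally sized buffers, roughly:

/* Minimal sketch (not SPDK code): "compare" succeeds only when both
 * buffers contain identical bytes. */
#include <stddef.h>
#include <string.h>

static int compare_sw(const void *a, const void *b, size_t len)
{
    return memcmp(a, b, len) == 0 ? 0 : -1;   /* 0 = match, -1 = mismatch */
}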
00:05:52.866 [2024-07-15 16:22:32.230671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017730 ] 00:05:52.866 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.866 [2024-07-15 16:22:32.300040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.866 [2024-07-15 16:22:32.370968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.866 16:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.243 00:05:54.243 real 0m1.336s 00:05:54.243 user 0m1.214s 00:05:54.243 sys 0m0.135s 00:05:54.243 16:22:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.243 16:22:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:54.243 ************************************ 00:05:54.243 END TEST accel_xor 00:05:54.243 ************************************ 00:05:54.243 16:22:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.243 16:22:33 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:54.243 16:22:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:54.243 16:22:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.243 16:22:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.243 ************************************ 00:05:54.243 START TEST accel_xor 00:05:54.243 ************************************ 00:05:54.243 16:22:33 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:54.243 16:22:33 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:54.243 [2024-07-15 16:22:33.649498] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
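The two accel_xor tests (the one ending above and the '-x 3' variant that starts next) run the xor opcode with two and three source buffers respectively -- the log records 'val=2' for the first run and 'val=3' for the second. A minimal software sketch of an N-source XOR (not SPDK's implementation, helper name invented) is:

/* Minimal sketch (not SPDK code): XOR N equally sized source buffers into
 * a destination, byte by byte. */
#include <stdint.h>
#include <stddef.h>

static void xor_sw(uint8_t *dst, const uint8_t *const srcs[],
                   size_t nsrcs, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t v = 0;
        for (size_t s = 0; s < nsrcs; s++)
            v ^= srcs[s][i];                  /* accumulate across sources */
        dst[i] = v;
    }
}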
00:05:54.244 [2024-07-15 16:22:33.649571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018013 ] 00:05:54.244 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.244 [2024-07-15 16:22:33.720267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.244 [2024-07-15 16:22:33.790831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.244 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.503 16:22:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:55.439 16:22:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.439 00:05:55.439 real 0m1.338s 00:05:55.439 user 0m1.218s 00:05:55.439 sys 0m0.132s 00:05:55.439 16:22:34 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.439 16:22:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:55.439 ************************************ 00:05:55.439 END TEST accel_xor 00:05:55.439 ************************************ 00:05:55.439 16:22:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.439 16:22:35 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:55.439 16:22:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:55.439 16:22:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.439 16:22:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.699 ************************************ 00:05:55.699 START TEST accel_dif_verify 00:05:55.699 ************************************ 00:05:55.699 16:22:35 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:55.699 [2024-07-15 16:22:35.066566] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
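The accel_dif_verify run starting above (and the accel_dif_generate run that follows it) cover T10 DIF-style protection information: per data block, an 8-byte tuple holding a 16-bit guard CRC, a 16-bit application tag and a 32-bit reference tag. The '4096 bytes' / '512 bytes' / '8 bytes' values in the log presumably correspond to the transfer, block and metadata sizes. The sketch below is a simplified illustration (host-endian fields, fixed tags, invented function names), not SPDK's DIF code; the guard uses the T10 DIF CRC-16 polynomial 0x8BB7.

/* Minimal sketch (not SPDK code): generate and verify a DIF tuple for one
 * data block. Real DIF stores the fields big-endian and supports several
 * tag/check policies; this keeps everything host-endian for brevity. */
#include <stdint.h>
#include <stddef.h>

struct dif_tuple {
    uint16_t guard;     /* CRC-16 (poly 0x8BB7) over the data block */
    uint16_t app_tag;
    uint32_t ref_tag;
};

static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((uint16_t)buf[i] << 8);
        for (int k = 0; k < 8; k++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

static void dif_generate_sw(const uint8_t *block, size_t block_size,
                            uint32_t ref_tag, struct dif_tuple *pi)
{
    pi->guard = crc16_t10dif(block, block_size);
    pi->app_tag = 0;
    pi->ref_tag = ref_tag;
}

static int dif_verify_sw(const uint8_t *block, size_t block_size,
                         uint32_t ref_tag, const struct dif_tuple *pi)
{
    if (pi->guard != crc16_t10dif(block, block_size))
        return -1;                             /* guard (CRC) mismatch */
    if (pi->ref_tag != ref_tag)
        return -1;                             /* reference tag mismatch */
    return 0;
}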
00:05:55.699 [2024-07-15 16:22:35.066637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018300 ] 00:05:55.699 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.699 [2024-07-15 16:22:35.134943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.699 [2024-07-15 16:22:35.205106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.699 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.700 16:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:57.074 16:22:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.074 00:05:57.074 real 0m1.334s 00:05:57.074 user 0m1.217s 00:05:57.074 sys 0m0.131s 00:05:57.074 16:22:36 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.074 16:22:36 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:57.074 ************************************ 00:05:57.074 END TEST accel_dif_verify 00:05:57.074 ************************************ 00:05:57.074 16:22:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.074 16:22:36 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:57.074 16:22:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:57.074 16:22:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.074 16:22:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.074 ************************************ 00:05:57.074 START TEST accel_dif_generate 00:05:57.074 ************************************ 00:05:57.074 16:22:36 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 
16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:57.074 [2024-07-15 16:22:36.480336] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:57.074 [2024-07-15 16:22:36.480424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018584 ] 00:05:57.074 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.074 [2024-07-15 16:22:36.549531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.074 [2024-07-15 16:22:36.619784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:57.074 16:22:36 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.074 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.333 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.333 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.333 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.333 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:57.333 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.333 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.333 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.333 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:57.333 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.334 16:22:36 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.334 16:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.271 16:22:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:58.271 16:22:37 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.271 00:05:58.271 real 0m1.336s 00:05:58.271 user 0m1.215s 00:05:58.271 sys 0m0.139s 00:05:58.271 16:22:37 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.271 16:22:37 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 ************************************ 00:05:58.271 END TEST accel_dif_generate 00:05:58.271 ************************************ 00:05:58.271 16:22:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.271 16:22:37 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:58.271 16:22:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:58.271 16:22:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.271 16:22:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.531 ************************************ 00:05:58.531 START TEST accel_dif_generate_copy 00:05:58.531 ************************************ 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:58.531 16:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:58.531 [2024-07-15 16:22:37.886652] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
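The dif_verify, dif_generate and dif_generate_copy cases traced above all follow the same pattern: accel.sh's accel_test helper builds an accel JSON config (passed in through -c /dev/fd/62), runs the accel_perf example for one second per opcode, and finally asserts that the software module handled the operation. A minimal standalone reproduction, assuming a built SPDK tree at the workspace path recorded in the log (the SPDK_DIR variable below is purely illustrative), might look like:

    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # one-second software-path run of each DIF opcode exercised above
    for op in dif_verify dif_generate dif_generate_copy; do
        "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$op"
    done

Only flags that actually appear in the trace are used here; since the harness added no hardware module to the JSON config (the empty [[ -n '' ]] check at accel/accel.sh@36), the run presumably lands on the software engine either way, which is exactly what the closing [[ -n software ]] checks assert.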
00:05:58.531 [2024-07-15 16:22:37.886738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018872 ] 00:05:58.531 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.531 [2024-07-15 16:22:37.955431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.531 [2024-07-15 16:22:38.025851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.531 16:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.907 00:05:59.907 real 0m1.333s 00:05:59.907 user 0m1.212s 00:05:59.907 sys 0m0.135s 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.907 16:22:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:59.907 ************************************ 00:05:59.907 END TEST accel_dif_generate_copy 00:05:59.907 ************************************ 00:05:59.907 16:22:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.907 16:22:39 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:59.907 16:22:39 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:59.907 16:22:39 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:59.907 16:22:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.907 16:22:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.907 ************************************ 00:05:59.907 START TEST accel_comp 00:05:59.907 ************************************ 00:05:59.907 16:22:39 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:59.907 16:22:39 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:59.907 16:22:39 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:59.907 [2024-07-15 16:22:39.306196] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:59.907 [2024-07-15 16:22:39.306275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019151 ] 00:05:59.907 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.907 [2024-07-15 16:22:39.377964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.908 [2024-07-15 16:22:39.450092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.908 16:22:39 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.908 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 16:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:01.103 16:22:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.103 00:06:01.103 real 0m1.347s 00:06:01.103 user 0m1.232s 00:06:01.103 sys 0m0.130s 00:06:01.103 16:22:40 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.103 16:22:40 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:01.103 ************************************ 00:06:01.103 END TEST accel_comp 00:06:01.103 ************************************ 00:06:01.103 16:22:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.103 16:22:40 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:01.103 16:22:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:01.104 16:22:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
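The compress/decompress pair around this point differs from the DIF cases only in the extra options visible in the run_test lines: -l points accel_perf at the sample payload test/accel/bib, and the decompress runs add -y, which appears to be the result-verification switch. Under the same illustrative SPDK_DIR as in the earlier sketch, the pair reduces to roughly:

    BIB="$SPDK_DIR/test/accel/bib"    # sample input file named in the trace
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress   -l "$BIB"
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y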
00:06:01.104 16:22:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.363 ************************************ 00:06:01.363 START TEST accel_decomp 00:06:01.363 ************************************ 00:06:01.363 16:22:40 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:01.363 [2024-07-15 16:22:40.734178] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
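Each accel_perf start in this log prints the same DPDK notice, "EAL: No free 2048 kB hugepages reported on node 1". On this rig it is informational (node 1 simply has no 2 MB pages reserved) and none of the tests fail because of it; if the per-node reservation ever needed checking on a similar host, the standard sysfs counters would show it (assuming the usual Linux hugepage layout):

    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages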
00:06:01.363 [2024-07-15 16:22:40.734258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019426 ] 00:06:01.363 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.363 [2024-07-15 16:22:40.806541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.363 [2024-07-15 16:22:40.878361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.363 16:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.737 16:22:42 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.737 16:22:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.737 00:06:02.737 real 0m1.345s 00:06:02.737 user 0m1.224s 00:06:02.737 sys 0m0.136s 00:06:02.737 16:22:42 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.737 16:22:42 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:02.737 ************************************ 00:06:02.737 END TEST accel_decomp 00:06:02.737 ************************************ 00:06:02.737 16:22:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.737 16:22:42 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.737 16:22:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:02.737 16:22:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.737 16:22:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.737 ************************************ 00:06:02.737 START TEST accel_decomp_full 00:06:02.737 ************************************ 00:06:02.737 16:22:42 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.737 16:22:42 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:02.737 16:22:42 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:02.737 [2024-07-15 16:22:42.160659] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:02.737 [2024-07-15 16:22:42.160742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019649 ] 00:06:02.737 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.737 [2024-07-15 16:22:42.232887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.737 [2024-07-15 16:22:42.304093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.995 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.995 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.995 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- 
# case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.996 16:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.931 16:22:43 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.931 00:06:03.931 real 0m1.356s 00:06:03.931 user 0m1.242s 00:06:03.931 sys 0m0.128s 00:06:03.931 16:22:43 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.931 16:22:43 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:03.931 ************************************ 00:06:03.931 END TEST accel_decomp_full 00:06:03.931 ************************************ 00:06:04.189 16:22:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.189 16:22:43 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.189 16:22:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 
']' 00:06:04.189 16:22:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.189 16:22:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.189 ************************************ 00:06:04.189 START TEST accel_decomp_mcore 00:06:04.189 ************************************ 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:04.189 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:04.189 [2024-07-15 16:22:43.596550] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
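Note: the trace above shows run_test launching the multi-core decompress case. accel_test hands its arguments to the accel_perf example binary, while the accel.sh `IFS=:` / `read -r var val` loop is the harness re-parsing that same argument list into its bookkeeping variables (accel_opc=decompress, accel_module=software, and so on), and build_accel_config feeds an (empty, in this run) JSON accel config to accel_perf over /dev/fd/62 via `jq -r .`. A minimal stand-alone sketch of the same invocation follows; the flag readings are inferred from this trace (duration, workload, input file, verify, core mask), not quoted from accel_perf's usage text:

  # hypothetical reproduction of the accel_decomp_mcore invocation
  # -t 1: run for 1 second      -w decompress: workload type
  # -l <file>: input data       -y: verify the output
  # -m 0xf: core mask -> the four reactors on cores 0-3 seen below
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -m 0xf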
00:06:04.189 [2024-07-15 16:22:43.596632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019885 ] 00:06:04.189 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.189 [2024-07-15 16:22:43.669338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.189 [2024-07-15 16:22:43.743873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.189 [2024-07-15 16:22:43.743968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.189 [2024-07-15 16:22:43.744029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.189 [2024-07-15 16:22:43.744031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.448 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.449 16:22:43 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:06:04.449 16:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.384 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.385 00:06:05.385 real 0m1.360s 00:06:05.385 user 0m4.571s 00:06:05.385 sys 0m0.137s 00:06:05.385 16:22:44 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.385 16:22:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:05.385 ************************************ 00:06:05.385 END TEST accel_decomp_mcore 00:06:05.385 ************************************ 00:06:05.385 16:22:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.385 16:22:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.385 16:22:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:05.385 16:22:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.385 16:22:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.644 ************************************ 00:06:05.644 START TEST accel_decomp_full_mcore 00:06:05.644 ************************************ 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:05.644 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:05.644 [2024-07-15 16:22:45.037061] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
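Note: accel_decomp_full_mcore repeats the multi-core decompress run with -o 0 added. Judging from this trace, -o selects the per-operation transfer size and 0 means "use the whole input buffer", which is why the size the harness records grows from '4096 bytes' in the plain mcore case to '111250 bytes' below; this reading is an inference from the log, not from accel_perf's help output. A short sketch of the two invocations side by side:

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # accel_decomp_mcore:      4 KiB operations across the four cores in mask 0xf
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf
  # accel_decomp_full_mcore: full-buffer operations (-o 0), same core mask
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf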
00:06:05.644 [2024-07-15 16:22:45.037151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020115 ] 00:06:05.644 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.644 [2024-07-15 16:22:45.107322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.644 [2024-07-15 16:22:45.183139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.645 [2024-07-15 16:22:45.183233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.645 [2024-07-15 16:22:45.183299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.645 [2024-07-15 16:22:45.183297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.645 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.925 16:22:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.904 00:06:06.904 real 0m1.364s 00:06:06.904 user 0m4.588s 00:06:06.904 sys 0m0.142s 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.904 16:22:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:06.904 ************************************ 00:06:06.904 END TEST accel_decomp_full_mcore 00:06:06.904 ************************************ 00:06:06.904 16:22:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.904 16:22:46 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:06.904 16:22:46 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:06.904 16:22:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.904 16:22:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.904 ************************************ 00:06:06.904 START TEST accel_decomp_mthread 00:06:06.904 ************************************ 00:06:06.904 16:22:46 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:06.904 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:06.904 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:06.904 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:06.905 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:06.905 [2024-07-15 16:22:46.481380] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
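Note: accel_decomp_mthread moves from multiple cores to multiple threads on one core: the EAL core mask drops to 0x1 (a single reactor on core 0 below) and -T 2 is passed, which the harness records as val=2, i.e. two worker threads. Again the flag meanings are inferred from this trace rather than taken from the tool's usage text:

  # hypothetical reproduction: one core, two threads, 4 KiB decompress with verify
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -T 2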
00:06:06.905 [2024-07-15 16:22:46.481480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020354 ] 00:06:07.164 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.164 [2024-07-15 16:22:46.550393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.164 [2024-07-15 16:22:46.621340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.164 16:22:46 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.164 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.165 16:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.541 16:22:47 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.541 00:06:08.541 real 0m1.341s 00:06:08.541 user 0m1.223s 00:06:08.541 sys 0m0.133s 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.541 16:22:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:08.541 ************************************ 00:06:08.541 END TEST accel_decomp_mthread 00:06:08.541 ************************************ 00:06:08.541 16:22:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.541 16:22:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:08.541 16:22:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:08.541 16:22:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:06:08.541 16:22:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.541 ************************************ 00:06:08.541 START TEST accel_decomp_full_mthread 00:06:08.541 ************************************ 00:06:08.541 16:22:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:08.541 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:08.541 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:08.541 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.541 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:08.542 16:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:08.542 [2024-07-15 16:22:47.903798] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
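Note: accel_decomp_full_mthread combines the two previous variations: full-buffer operations (-o 0, recorded as '111250 bytes' in the trace below) and two worker threads on a single core (-T 2, core mask 0x1). A compact sketch, with the same caveat that the flag meanings are inferred from this log:

  # hypothetical reproduction: full-buffer decompress, one core, two threads
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -T 2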
00:06:08.542 [2024-07-15 16:22:47.903878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020623 ] 00:06:08.542 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.542 [2024-07-15 16:22:47.976258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.542 [2024-07-15 16:22:48.047684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.542 16:22:48 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.542 16:22:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.916 00:06:09.916 real 0m1.368s 00:06:09.916 user 0m1.252s 00:06:09.916 sys 0m0.129s 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.916 16:22:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:09.916 ************************************ 00:06:09.916 END 
TEST accel_decomp_full_mthread 00:06:09.916 ************************************ 00:06:09.917 16:22:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.917 16:22:49 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:09.917 16:22:49 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:09.917 16:22:49 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:09.917 16:22:49 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.917 16:22:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.917 16:22:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.917 16:22:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.917 16:22:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.917 16:22:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.917 16:22:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.917 16:22:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.917 16:22:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:09.917 16:22:49 accel -- accel/accel.sh@41 -- # jq -r . 00:06:09.917 ************************************ 00:06:09.917 START TEST accel_dif_functional_tests 00:06:09.917 ************************************ 00:06:09.917 16:22:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:09.917 [2024-07-15 16:22:49.352959] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:09.917 [2024-07-15 16:22:49.353054] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020912 ] 00:06:09.917 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.917 [2024-07-15 16:22:49.422894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.917 [2024-07-15 16:22:49.495525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.917 [2024-07-15 16:22:49.495622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.917 [2024-07-15 16:22:49.495622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.175 00:06:10.175 00:06:10.175 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.175 http://cunit.sourceforge.net/ 00:06:10.175 00:06:10.175 00:06:10.175 Suite: accel_dif 00:06:10.175 Test: verify: DIF generated, GUARD check ...passed 00:06:10.175 Test: verify: DIF generated, APPTAG check ...passed 00:06:10.175 Test: verify: DIF generated, REFTAG check ...passed 00:06:10.175 Test: verify: DIF not generated, GUARD check ...[2024-07-15 16:22:49.564244] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.175 passed 00:06:10.175 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 16:22:49.564300] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.175 passed 00:06:10.175 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 16:22:49.564327] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.175 passed 00:06:10.176 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:10.176 Test: verify: APPTAG incorrect, APPTAG check 
...[2024-07-15 16:22:49.564379] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:10.176 passed 00:06:10.176 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:10.176 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:10.176 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:10.176 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 16:22:49.564481] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:10.176 passed 00:06:10.176 Test: verify copy: DIF generated, GUARD check ...passed 00:06:10.176 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:10.176 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:10.176 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 16:22:49.564601] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.176 passed 00:06:10.176 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 16:22:49.564631] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.176 passed 00:06:10.176 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 16:22:49.564657] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.176 passed 00:06:10.176 Test: generate copy: DIF generated, GUARD check ...passed 00:06:10.176 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:10.176 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:10.176 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:10.176 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:10.176 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:10.176 Test: generate copy: iovecs-len validate ...[2024-07-15 16:22:49.564832] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:10.176 passed 00:06:10.176 Test: generate copy: buffer alignment validate ...passed 00:06:10.176 00:06:10.176 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.176 suites 1 1 n/a 0 0 00:06:10.176 tests 26 26 26 0 0 00:06:10.176 asserts 115 115 115 0 n/a 00:06:10.176 00:06:10.176 Elapsed time = 0.002 seconds 00:06:10.176 00:06:10.176 real 0m0.398s 00:06:10.176 user 0m0.597s 00:06:10.176 sys 0m0.156s 00:06:10.176 16:22:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.176 16:22:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:10.176 ************************************ 00:06:10.176 END TEST accel_dif_functional_tests 00:06:10.176 ************************************ 00:06:10.434 16:22:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.434 00:06:10.434 real 0m31.513s 00:06:10.434 user 0m34.646s 00:06:10.434 sys 0m4.995s 00:06:10.434 16:22:49 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.434 16:22:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.434 ************************************ 00:06:10.434 END TEST accel 00:06:10.434 ************************************ 00:06:10.434 16:22:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.435 16:22:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:10.435 16:22:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.435 16:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.435 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:06:10.435 ************************************ 00:06:10.435 START TEST accel_rpc 00:06:10.435 ************************************ 00:06:10.435 16:22:49 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:10.435 * Looking for test storage... 00:06:10.435 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:10.435 16:22:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:10.435 16:22:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2021200 00:06:10.435 16:22:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2021200 00:06:10.435 16:22:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:10.435 16:22:49 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2021200 ']' 00:06:10.435 16:22:49 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.435 16:22:49 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.435 16:22:49 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.435 16:22:49 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.435 16:22:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.435 [2024-07-15 16:22:49.989837] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:10.435 [2024-07-15 16:22:49.989926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021200 ] 00:06:10.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.693 [2024-07-15 16:22:50.063114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.693 [2024-07-15 16:22:50.138971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.259 16:22:50 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.259 16:22:50 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:11.259 16:22:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:11.259 16:22:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:11.259 16:22:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:11.259 16:22:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:11.259 16:22:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:11.259 16:22:50 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.259 16:22:50 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.259 16:22:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.259 ************************************ 00:06:11.259 START TEST accel_assign_opcode 00:06:11.259 ************************************ 00:06:11.259 16:22:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:11.259 16:22:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:11.259 16:22:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.259 16:22:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:11.259 [2024-07-15 16:22:50.845062] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:11.259 16:22:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.259 16:22:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:11.259 16:22:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.259 16:22:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:11.259 [2024-07-15 16:22:50.853067] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:11.517 16:22:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.517 16:22:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:11.517 16:22:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.517 16:22:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:11.517 16:22:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.517 16:22:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:11.517 16:22:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.517 16:22:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:06:11.517 16:22:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:11.517 16:22:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:11.517 16:22:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.517 software 00:06:11.517 00:06:11.517 real 0m0.235s 00:06:11.517 user 0m0.041s 00:06:11.517 sys 0m0.012s 00:06:11.517 16:22:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.517 16:22:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:11.517 ************************************ 00:06:11.517 END TEST accel_assign_opcode 00:06:11.517 ************************************ 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:11.776 16:22:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2021200 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2021200 ']' 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2021200 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2021200 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2021200' 00:06:11.776 killing process with pid 2021200 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@967 -- # kill 2021200 00:06:11.776 16:22:51 accel_rpc -- common/autotest_common.sh@972 -- # wait 2021200 00:06:12.034 00:06:12.034 real 0m1.626s 00:06:12.034 user 0m1.663s 00:06:12.034 sys 0m0.479s 00:06:12.034 16:22:51 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.034 16:22:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.034 ************************************ 00:06:12.034 END TEST accel_rpc 00:06:12.034 ************************************ 00:06:12.034 16:22:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.034 16:22:51 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.034 16:22:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.034 16:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.034 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:06:12.034 ************************************ 00:06:12.034 START TEST app_cmdline 00:06:12.034 ************************************ 00:06:12.034 16:22:51 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.293 * Looking for test storage... 
00:06:12.293 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:12.293 16:22:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.293 16:22:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2021565 00:06:12.293 16:22:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.293 16:22:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2021565 00:06:12.293 16:22:51 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2021565 ']' 00:06:12.293 16:22:51 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.293 16:22:51 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.293 16:22:51 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.293 16:22:51 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.293 16:22:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.293 [2024-07-15 16:22:51.697699] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:12.293 [2024-07-15 16:22:51.697770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021565 ] 00:06:12.293 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.293 [2024-07-15 16:22:51.766547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.293 [2024-07-15 16:22:51.838149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:13.230 { 00:06:13.230 "version": "SPDK v24.09-pre git sha1 72fc6988f", 00:06:13.230 "fields": { 00:06:13.230 "major": 24, 00:06:13.230 "minor": 9, 00:06:13.230 "patch": 0, 00:06:13.230 "suffix": "-pre", 00:06:13.230 "commit": "72fc6988f" 00:06:13.230 } 00:06:13.230 } 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:13.230 16:22:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:13.230 16:22:52 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.491 request: 00:06:13.491 { 00:06:13.491 "method": "env_dpdk_get_mem_stats", 00:06:13.491 "req_id": 1 00:06:13.491 } 00:06:13.491 Got JSON-RPC error response 00:06:13.491 response: 00:06:13.491 { 00:06:13.491 "code": -32601, 00:06:13.491 "message": "Method not found" 00:06:13.491 } 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.491 16:22:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2021565 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2021565 ']' 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2021565 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2021565 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2021565' 00:06:13.491 killing process with pid 2021565 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@967 -- # kill 2021565 00:06:13.491 16:22:52 app_cmdline -- common/autotest_common.sh@972 -- # wait 2021565 00:06:13.762 00:06:13.762 real 0m1.696s 00:06:13.762 user 0m1.981s 00:06:13.762 sys 0m0.482s 00:06:13.762 16:22:53 app_cmdline -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:06:13.762 16:22:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.762 ************************************ 00:06:13.762 END TEST app_cmdline 00:06:13.763 ************************************ 00:06:13.763 16:22:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.763 16:22:53 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:13.763 16:22:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.763 16:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.763 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 ************************************ 00:06:13.763 START TEST version 00:06:13.763 ************************************ 00:06:13.763 16:22:53 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:14.027 * Looking for test storage... 00:06:14.027 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:14.027 16:22:53 version -- app/version.sh@17 -- # get_header_version major 00:06:14.027 16:22:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:14.027 16:22:53 version -- app/version.sh@14 -- # cut -f2 00:06:14.027 16:22:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.027 16:22:53 version -- app/version.sh@17 -- # major=24 00:06:14.027 16:22:53 version -- app/version.sh@18 -- # get_header_version minor 00:06:14.027 16:22:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:14.027 16:22:53 version -- app/version.sh@14 -- # cut -f2 00:06:14.027 16:22:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.027 16:22:53 version -- app/version.sh@18 -- # minor=9 00:06:14.027 16:22:53 version -- app/version.sh@19 -- # get_header_version patch 00:06:14.027 16:22:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:14.027 16:22:53 version -- app/version.sh@14 -- # cut -f2 00:06:14.027 16:22:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.027 16:22:53 version -- app/version.sh@19 -- # patch=0 00:06:14.027 16:22:53 version -- app/version.sh@20 -- # get_header_version suffix 00:06:14.027 16:22:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:14.027 16:22:53 version -- app/version.sh@14 -- # cut -f2 00:06:14.027 16:22:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.027 16:22:53 version -- app/version.sh@20 -- # suffix=-pre 00:06:14.027 16:22:53 version -- app/version.sh@22 -- # version=24.9 00:06:14.027 16:22:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:14.027 16:22:53 version -- app/version.sh@28 -- # version=24.9rc0 00:06:14.027 16:22:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:14.027 16:22:53 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:14.027 16:22:53 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:14.027 16:22:53 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:14.027 00:06:14.027 real 0m0.178s 00:06:14.027 user 0m0.076s 00:06:14.027 sys 0m0.147s 00:06:14.027 16:22:53 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.027 16:22:53 version -- common/autotest_common.sh@10 -- # set +x 00:06:14.027 ************************************ 00:06:14.027 END TEST version 00:06:14.027 ************************************ 00:06:14.027 16:22:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.027 16:22:53 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@198 -- # uname -s 00:06:14.027 16:22:53 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:14.027 16:22:53 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:14.027 16:22:53 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:14.027 16:22:53 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:14.027 16:22:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.027 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:14.027 16:22:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:06:14.027 16:22:53 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:14.027 16:22:53 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:06:14.027 16:22:53 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:06:14.027 16:22:53 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:14.027 16:22:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.027 16:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.028 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:14.285 ************************************ 00:06:14.285 START TEST llvm_fuzz 00:06:14.285 ************************************ 00:06:14.285 16:22:53 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:14.285 * Looking for test storage... 
00:06:14.285 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:14.285 16:22:53 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:14.285 16:22:53 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:14.285 16:22:53 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:06:14.285 16:22:53 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:06:14.285 16:22:53 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:06:14.286 16:22:53 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:14.286 16:22:53 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:14.286 16:22:53 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:14.286 16:22:53 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:14.286 16:22:53 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.286 16:22:53 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.286 16:22:53 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:14.286 ************************************ 00:06:14.286 START TEST nvmf_llvm_fuzz 00:06:14.286 ************************************ 00:06:14.286 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:14.547 * Looking for test storage... 
00:06:14.547 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:14.547 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:14.548 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:14.549 #define SPDK_CONFIG_H 00:06:14.549 #define SPDK_CONFIG_APPS 1 00:06:14.549 #define SPDK_CONFIG_ARCH native 00:06:14.549 #undef SPDK_CONFIG_ASAN 00:06:14.549 #undef SPDK_CONFIG_AVAHI 00:06:14.549 #undef SPDK_CONFIG_CET 00:06:14.549 #define SPDK_CONFIG_COVERAGE 1 00:06:14.549 #define SPDK_CONFIG_CROSS_PREFIX 00:06:14.549 #undef SPDK_CONFIG_CRYPTO 00:06:14.549 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:14.549 #undef SPDK_CONFIG_CUSTOMOCF 00:06:14.549 #undef SPDK_CONFIG_DAOS 00:06:14.549 #define SPDK_CONFIG_DAOS_DIR 00:06:14.549 #define SPDK_CONFIG_DEBUG 1 00:06:14.549 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:14.549 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:14.549 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:14.549 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:14.549 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:14.549 #undef SPDK_CONFIG_DPDK_UADK 00:06:14.549 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:14.549 #define SPDK_CONFIG_EXAMPLES 1 00:06:14.549 #undef SPDK_CONFIG_FC 00:06:14.549 #define SPDK_CONFIG_FC_PATH 00:06:14.549 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:14.549 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:14.549 #undef SPDK_CONFIG_FUSE 00:06:14.549 #define SPDK_CONFIG_FUZZER 1 00:06:14.549 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:14.549 #undef SPDK_CONFIG_GOLANG 00:06:14.549 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:14.549 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:14.549 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:14.549 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:14.549 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:14.549 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:14.549 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:14.549 #define SPDK_CONFIG_IDXD 1 00:06:14.549 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:14.549 #undef SPDK_CONFIG_IPSEC_MB 00:06:14.549 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:14.549 #define SPDK_CONFIG_ISAL 1 00:06:14.549 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:06:14.549 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:14.549 #define SPDK_CONFIG_LIBDIR 00:06:14.549 #undef SPDK_CONFIG_LTO 00:06:14.549 #define SPDK_CONFIG_MAX_LCORES 128 00:06:14.549 #define SPDK_CONFIG_NVME_CUSE 1 00:06:14.549 #undef SPDK_CONFIG_OCF 00:06:14.549 #define SPDK_CONFIG_OCF_PATH 00:06:14.549 #define SPDK_CONFIG_OPENSSL_PATH 00:06:14.549 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:14.549 #define SPDK_CONFIG_PGO_DIR 00:06:14.549 #undef SPDK_CONFIG_PGO_USE 00:06:14.549 #define SPDK_CONFIG_PREFIX /usr/local 00:06:14.549 #undef SPDK_CONFIG_RAID5F 00:06:14.549 #undef SPDK_CONFIG_RBD 00:06:14.549 #define SPDK_CONFIG_RDMA 1 00:06:14.549 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:14.549 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:14.549 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:14.549 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:14.549 #undef SPDK_CONFIG_SHARED 00:06:14.549 #undef SPDK_CONFIG_SMA 00:06:14.549 #define SPDK_CONFIG_TESTS 1 00:06:14.549 #undef SPDK_CONFIG_TSAN 00:06:14.549 #define SPDK_CONFIG_UBLK 1 00:06:14.549 #define SPDK_CONFIG_UBSAN 1 00:06:14.549 #undef SPDK_CONFIG_UNIT_TESTS 00:06:14.549 #undef SPDK_CONFIG_URING 00:06:14.549 #define SPDK_CONFIG_URING_PATH 00:06:14.549 #undef SPDK_CONFIG_URING_ZNS 00:06:14.549 #undef SPDK_CONFIG_USDT 00:06:14.549 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:14.549 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:14.549 #define SPDK_CONFIG_VFIO_USER 1 00:06:14.549 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:14.549 #define SPDK_CONFIG_VHOST 1 00:06:14.549 #define SPDK_CONFIG_VIRTIO 1 00:06:14.549 #undef SPDK_CONFIG_VTUNE 00:06:14.549 #define SPDK_CONFIG_VTUNE_DIR 00:06:14.549 #define SPDK_CONFIG_WERROR 1 00:06:14.549 #define SPDK_CONFIG_WPDK_DIR 00:06:14.549 #undef SPDK_CONFIG_XNVME 00:06:14.549 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.549 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:14.550 
16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:14.550 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:14.551 16:22:53 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:14.551 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:14.552 16:22:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:06:14.552 16:22:54 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:14.552 16:22:54 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 2021997 ]] 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 2021997 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.JZ0CTc 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.JZ0CTc/tests/nvmf /tmp/spdk.JZ0CTc 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:14.552 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- 
# uses["$mount"]=0 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=954408960 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4330020864 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=54032916480 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742317568 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7709401088 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866448384 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342484992 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348465152 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5980160 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870278144 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=880640 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:14.553 16:22:54 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:14.553 * Looking for test storage... 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=54032916480 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=9923993600 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:14.553 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:14.553 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:14.554 16:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:14.554 [2024-07-15 16:22:54.125247] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:14.554 [2024-07-15 16:22:54.125314] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022048 ] 00:06:14.812 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.812 [2024-07-15 16:22:54.376816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.071 [2024-07-15 16:22:54.463529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.071 [2024-07-15 16:22:54.522619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.071 [2024-07-15 16:22:54.538919] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:15.071 INFO: Running with entropic power schedule (0xFF, 100). 00:06:15.071 INFO: Seed: 4126188483 00:06:15.071 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:15.071 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:15.071 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:15.071 INFO: A corpus is not provided, starting from an empty corpus 00:06:15.071 #2 INITED exec/s: 0 rss: 64Mb 00:06:15.071 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
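Before this first instance started, the nvmf/run.sh trace (@34-@38 above) shows how each fuzzer gets its own TCP port and JSON target config: the port is 44 followed by the zero-padded fuzzer index, and the stock fuzz_json.conf, which listens on 4420, is rewritten with sed to match. A rough bash sketch reconstructed from that trace (writing the sed output into the /tmp config file is an assumption; the redirection itself is not visible in the log):

    # Per-fuzzer port and config setup, reconstructed from the nvmf/run.sh
    # xtrace above (not the verbatim script). Fuzzer N listens on port 44NN.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzzer_type=0
    port="44$(printf %02d "$fuzzer_type")"   # 4400 for fuzzer 0, 4401 for fuzzer 1, ...
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    # Assumption: the rewritten config is what later shows up as -c /tmp/fuzz_json_0.conf
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

The trid string is then handed to llvm_nvme_fuzz via -F, as in the command line logged above.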
00:06:15.071 This may also happen if the target rejected all inputs we tried so far 00:06:15.071 [2024-07-15 16:22:54.605721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.071 [2024-07-15 16:22:54.605761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.330 NEW_FUNC[1/695]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:15.330 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:15.330 #21 NEW cov: 11851 ft: 11849 corp: 2/71b lim: 320 exec/s: 0 rss: 70Mb L: 70/70 MS: 4 InsertRepeatedBytes-InsertByte-EraseBytes-InsertRepeatedBytes- 00:06:15.589 [2024-07-15 16:22:54.935824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.589 [2024-07-15 16:22:54.935870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.589 #22 NEW cov: 11981 ft: 12595 corp: 3/141b lim: 320 exec/s: 0 rss: 70Mb L: 70/70 MS: 1 ShuffleBytes- 00:06:15.589 [2024-07-15 16:22:54.995736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.589 [2024-07-15 16:22:54.995763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.589 #28 NEW cov: 11987 ft: 12973 corp: 4/220b lim: 320 exec/s: 0 rss: 70Mb L: 79/79 MS: 1 CrossOver- 00:06:15.589 [2024-07-15 16:22:55.035830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e000000000000 00:06:15.589 [2024-07-15 16:22:55.035859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.589 #29 NEW cov: 12072 ft: 13194 corp: 5/290b lim: 320 exec/s: 0 rss: 70Mb L: 70/79 MS: 1 ChangeByte- 00:06:15.589 [2024-07-15 16:22:55.085979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e000000000000 00:06:15.589 [2024-07-15 16:22:55.086006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.589 #30 NEW cov: 12072 ft: 13298 corp: 6/360b lim: 320 exec/s: 0 rss: 71Mb L: 70/79 MS: 1 ChangeBinInt- 00:06:15.589 [2024-07-15 16:22:55.136185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.589 [2024-07-15 16:22:55.136212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.589 #31 NEW cov: 12072 ft: 13410 corp: 7/431b lim: 320 exec/s: 0 rss: 71Mb L: 71/79 MS: 1 InsertByte- 00:06:15.589 [2024-07-15 16:22:55.176292] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 
cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.589 [2024-07-15 16:22:55.176317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.848 #32 NEW cov: 12089 ft: 13507 corp: 8/498b lim: 320 exec/s: 0 rss: 71Mb L: 67/79 MS: 1 CrossOver- 00:06:15.849 [2024-07-15 16:22:55.216387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:f6f6f600 cdw11:00003300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.849 [2024-07-15 16:22:55.216412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.849 #33 NEW cov: 12089 ft: 13549 corp: 9/597b lim: 320 exec/s: 0 rss: 71Mb L: 99/99 MS: 1 CopyPart- 00:06:15.849 [2024-07-15 16:22:55.266560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e000000000000 00:06:15.849 [2024-07-15 16:22:55.266586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.849 #34 NEW cov: 12089 ft: 13607 corp: 10/704b lim: 320 exec/s: 0 rss: 71Mb L: 107/107 MS: 1 InsertRepeatedBytes- 00:06:15.849 [2024-07-15 16:22:55.306767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:f6f6f600 cdw11:00003300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.849 [2024-07-15 16:22:55.306794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.849 #35 NEW cov: 12089 ft: 13703 corp: 11/803b lim: 320 exec/s: 0 rss: 71Mb L: 99/107 MS: 1 ChangeBinInt- 00:06:15.849 [2024-07-15 16:22:55.356840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e002000000000 00:06:15.849 [2024-07-15 16:22:55.356867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.849 #36 NEW cov: 12089 ft: 13734 corp: 12/910b lim: 320 exec/s: 0 rss: 71Mb L: 107/107 MS: 1 ChangeBit- 00:06:15.849 [2024-07-15 16:22:55.407028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e002000000000 00:06:15.849 [2024-07-15 16:22:55.407056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.849 #37 NEW cov: 12089 ft: 13745 corp: 13/1017b lim: 320 exec/s: 0 rss: 71Mb L: 107/107 MS: 1 ChangeBit- 00:06:16.108 [2024-07-15 16:22:55.457250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e000000000000 00:06:16.108 [2024-07-15 16:22:55.457278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.108 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:16.108 #38 NEW cov: 12112 ft: 13792 corp: 14/1124b lim: 320 exec/s: 0 rss: 71Mb L: 107/107 MS: 1 ShuffleBytes- 00:06:16.108 [2024-07-15 16:22:55.497430] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.108 [2024-07-15 16:22:55.497459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.108 [2024-07-15 16:22:55.497600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:16.108 [2024-07-15 16:22:55.497620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.108 NEW_FUNC[1/1]: 0x138e890 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2047 00:06:16.108 #39 NEW cov: 12143 ft: 14016 corp: 15/1262b lim: 320 exec/s: 0 rss: 71Mb L: 138/138 MS: 1 InsertRepeatedBytes- 00:06:16.108 [2024-07-15 16:22:55.537365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.108 [2024-07-15 16:22:55.537392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.108 #40 NEW cov: 12143 ft: 14057 corp: 16/1346b lim: 320 exec/s: 0 rss: 71Mb L: 84/138 MS: 1 InsertRepeatedBytes- 00:06:16.108 [2024-07-15 16:22:55.587527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.108 [2024-07-15 16:22:55.587554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.108 #41 NEW cov: 12143 ft: 14086 corp: 17/1417b lim: 320 exec/s: 41 rss: 71Mb L: 71/138 MS: 1 ShuffleBytes- 00:06:16.108 [2024-07-15 16:22:55.627640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e000000000000 00:06:16.108 [2024-07-15 16:22:55.627666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.108 #42 NEW cov: 12143 ft: 14126 corp: 18/1487b lim: 320 exec/s: 42 rss: 71Mb L: 70/138 MS: 1 ShuffleBytes- 00:06:16.108 [2024-07-15 16:22:55.667808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.108 [2024-07-15 16:22:55.667836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.367 #43 NEW cov: 12143 ft: 14206 corp: 19/1559b lim: 320 exec/s: 43 rss: 72Mb L: 72/138 MS: 1 InsertByte- 00:06:16.367 [2024-07-15 16:22:55.717918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.367 [2024-07-15 16:22:55.717948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.367 #44 NEW cov: 12143 ft: 14241 corp: 20/1646b lim: 320 exec/s: 44 rss: 72Mb L: 87/138 MS: 1 EraseBytes- 00:06:16.367 [2024-07-15 16:22:55.768293] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.367 [2024-07-15 16:22:55.768320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.367 [2024-07-15 16:22:55.768440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:16.367 [2024-07-15 16:22:55.768462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.367 #45 NEW cov: 12145 ft: 14302 corp: 21/1805b lim: 320 exec/s: 45 rss: 72Mb L: 159/159 MS: 1 InsertRepeatedBytes- 00:06:16.367 [2024-07-15 16:22:55.808201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.367 [2024-07-15 16:22:55.808229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.367 #46 NEW cov: 12145 ft: 14316 corp: 22/1889b lim: 320 exec/s: 46 rss: 72Mb L: 84/159 MS: 1 ChangeBinInt- 00:06:16.367 [2024-07-15 16:22:55.858597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.367 [2024-07-15 16:22:55.858626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.367 [2024-07-15 16:22:55.858737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:16.367 [2024-07-15 16:22:55.858754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.367 #47 NEW cov: 12145 ft: 14326 corp: 23/2044b lim: 320 exec/s: 47 rss: 72Mb L: 155/159 MS: 1 InsertRepeatedBytes- 00:06:16.367 [2024-07-15 16:22:55.898374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.367 [2024-07-15 16:22:55.898401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.367 #48 NEW cov: 12145 ft: 14356 corp: 24/2111b lim: 320 exec/s: 48 rss: 72Mb L: 67/159 MS: 1 ChangeBit- 00:06:16.367 [2024-07-15 16:22:55.948625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.367 [2024-07-15 16:22:55.948653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.636 #49 NEW cov: 12145 ft: 14374 corp: 25/2198b lim: 320 exec/s: 49 rss: 72Mb L: 87/159 MS: 1 ChangeBit- 00:06:16.636 [2024-07-15 16:22:55.998824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.636 [2024-07-15 16:22:55.998851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.636 #50 NEW cov: 12145 ft: 14410 corp: 26/2268b lim: 320 exec/s: 50 rss: 72Mb L: 70/159 MS: 1 
ShuffleBytes- 00:06:16.636 [2024-07-15 16:22:56.038910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e002000000000 00:06:16.636 [2024-07-15 16:22:56.038937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.636 #51 NEW cov: 12145 ft: 14417 corp: 27/2375b lim: 320 exec/s: 51 rss: 72Mb L: 107/159 MS: 1 ChangeBinInt- 00:06:16.636 [2024-07-15 16:22:56.089049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x570000 00:06:16.636 [2024-07-15 16:22:56.089076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.636 #52 NEW cov: 12145 ft: 14459 corp: 28/2462b lim: 320 exec/s: 52 rss: 72Mb L: 87/159 MS: 1 ChangeBinInt- 00:06:16.636 [2024-07-15 16:22:56.129396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:f6f6f600 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.636 [2024-07-15 16:22:56.129425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.636 [2024-07-15 16:22:56.129548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:200000 cdw10:ffff0000 cdw11:0000ffff 00:06:16.636 [2024-07-15 16:22:56.129564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.636 #53 NEW cov: 12145 ft: 14464 corp: 29/2638b lim: 320 exec/s: 53 rss: 73Mb L: 176/176 MS: 1 CrossOver- 00:06:16.636 [2024-07-15 16:22:56.179133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf600000000000000 00:06:16.636 [2024-07-15 16:22:56.179160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.636 [2024-07-15 16:22:56.179272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:16.636 [2024-07-15 16:22:56.179291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.636 #54 NEW cov: 12145 ft: 14498 corp: 30/2772b lim: 320 exec/s: 54 rss: 73Mb L: 134/176 MS: 1 CrossOver- 00:06:16.636 [2024-07-15 16:22:56.219377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:f6f6f600 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.636 [2024-07-15 16:22:56.219405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.636 [2024-07-15 16:22:56.219537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:200000 cdw10:ffff0000 cdw11:0000ffff 00:06:16.636 [2024-07-15 16:22:56.219554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.895 #55 NEW cov: 12145 ft: 14506 corp: 31/2948b lim: 320 exec/s: 55 rss: 73Mb L: 176/176 MS: 1 ShuffleBytes- 00:06:16.895 [2024-07-15 
16:22:56.269589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.895 [2024-07-15 16:22:56.269614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.895 #56 NEW cov: 12145 ft: 14524 corp: 32/3067b lim: 320 exec/s: 56 rss: 73Mb L: 119/176 MS: 1 CopyPart- 00:06:16.895 [2024-07-15 16:22:56.309616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.895 [2024-07-15 16:22:56.309643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.895 #57 NEW cov: 12145 ft: 14536 corp: 33/3154b lim: 320 exec/s: 57 rss: 73Mb L: 87/176 MS: 1 CrossOver- 00:06:16.895 [2024-07-15 16:22:56.359796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:f6f6f600 cdw11:00003300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.895 [2024-07-15 16:22:56.359822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.895 #58 NEW cov: 12145 ft: 14553 corp: 34/3253b lim: 320 exec/s: 58 rss: 73Mb L: 99/176 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\004"- 00:06:16.895 [2024-07-15 16:22:56.400048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.895 [2024-07-15 16:22:56.400076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.895 [2024-07-15 16:22:56.400220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:16.895 [2024-07-15 16:22:56.400238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.895 #59 NEW cov: 12145 ft: 14566 corp: 35/3391b lim: 320 exec/s: 59 rss: 73Mb L: 138/176 MS: 1 ShuffleBytes- 00:06:16.895 [2024-07-15 16:22:56.440051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6fdf6 cdw10:f6f60000 cdw11:003300f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.895 [2024-07-15 16:22:56.440078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.895 #60 NEW cov: 12145 ft: 14573 corp: 36/3491b lim: 320 exec/s: 60 rss: 73Mb L: 100/176 MS: 1 InsertByte- 00:06:16.895 [2024-07-15 16:22:56.480125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e000000000000 00:06:16.895 [2024-07-15 16:22:56.480152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.155 #61 NEW cov: 12145 ft: 14644 corp: 37/3561b lim: 320 exec/s: 61 rss: 73Mb L: 70/176 MS: 1 ChangeBit- 00:06:17.155 [2024-07-15 16:22:56.530408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:ffffffff cdw11:ffffffff 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.155 [2024-07-15 16:22:56.530435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.155 #62 NEW cov: 12145 ft: 14660 corp: 38/3648b lim: 320 exec/s: 62 rss: 73Mb L: 87/176 MS: 1 ShuffleBytes- 00:06:17.155 [2024-07-15 16:22:56.570546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f6) qid:0 cid:4 nsid:f6f6f6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.155 [2024-07-15 16:22:56.570572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.155 [2024-07-15 16:22:56.570675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:7a0af600 cdw10:00000000 cdw11:00000000 00:06:17.155 [2024-07-15 16:22:56.570691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.155 #63 NEW cov: 12145 ft: 14679 corp: 39/3792b lim: 320 exec/s: 31 rss: 73Mb L: 144/176 MS: 1 InsertRepeatedBytes- 00:06:17.155 #63 DONE cov: 12145 ft: 14679 corp: 39/3792b lim: 320 exec/s: 31 rss: 73Mb 00:06:17.155 ###### Recommended dictionary. ###### 00:06:17.155 "\000\000\000\000\000\000\000\004" # Uses: 0 00:06:17.155 ###### End of recommended dictionary. ###### 00:06:17.155 Done 63 runs in 2 second(s) 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:17.155 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:17.413 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:17.413 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo 
leak:nvmf_ctrlr_create 00:06:17.413 16:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:17.413 [2024-07-15 16:22:56.778370] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:17.413 [2024-07-15 16:22:56.778437] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022576 ] 00:06:17.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.413 [2024-07-15 16:22:56.954996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.672 [2024-07-15 16:22:57.021854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.672 [2024-07-15 16:22:57.080730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.672 [2024-07-15 16:22:57.097037] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:17.672 INFO: Running with entropic power schedule (0xFF, 100). 00:06:17.672 INFO: Seed: 2386401414 00:06:17.672 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:17.672 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:17.672 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:17.672 INFO: A corpus is not provided, starting from an empty corpus 00:06:17.672 #2 INITED exec/s: 0 rss: 64Mb 00:06:17.672 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
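The same sequence now repeats for fuzzer 1 on port 4401. The driver for this is the loop traced in test/fuzz/llvm/common.sh and nvmf/run.sh above: fuzz_num is the number of ".fn =" handlers registered in llvm_nvme_fuzz.c (25 in this build), and each index gets a one-second run on core mask 0x1. A condensed bash sketch, reconstructed from the xtrace rather than copied from the scripts:

    # Short-fuzz driver loop, reconstructed from the common.sh / nvmf/run.sh
    # xtrace above (not the verbatim scripts). start_llvm_fuzz is the
    # nvmf/run.sh helper that builds the trid/config shown earlier and
    # launches llvm_nvme_fuzz for one fuzzer index.
    fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c
    fuzz_num=$(grep -c '\.fn =' "$fuzzfile")   # 25 admin-command handlers here
    start_llvm_fuzz_short() {
        local fuzz_num=$1 time=$2 i
        for ((i = 0; i < fuzz_num; i++)); do
            start_llvm_fuzz "$i" "$time" 0x1   # index, seconds (-t), core mask (-m)
        done
    }
    start_llvm_fuzz_short "$fuzz_num" 1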
00:06:17.672 This may also happen if the target rejected all inputs we tried so far 00:06:17.672 [2024-07-15 16:22:57.144629] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:17.672 [2024-07-15 16:22:57.144862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a360071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.672 [2024-07-15 16:22:57.144892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.931 NEW_FUNC[1/696]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:17.931 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:17.932 #14 NEW cov: 11946 ft: 11947 corp: 2/11b lim: 30 exec/s: 0 rss: 70Mb L: 10/10 MS: 2 InsertByte-CMP- DE: "q\000\000\000\000\000\000\000"- 00:06:17.932 [2024-07-15 16:22:57.465529] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:17.932 [2024-07-15 16:22:57.465773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a360071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.932 [2024-07-15 16:22:57.465804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.932 #15 NEW cov: 12076 ft: 12396 corp: 3/21b lim: 30 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:17.932 [2024-07-15 16:22:57.515525] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:17.932 [2024-07-15 16:22:57.515754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.932 [2024-07-15 16:22:57.515780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.190 #17 NEW cov: 12088 ft: 12829 corp: 4/30b lim: 30 exec/s: 0 rss: 70Mb L: 9/10 MS: 2 ShuffleBytes-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:18.190 [2024-07-15 16:22:57.555743] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:18.190 [2024-07-15 16:22:57.555882] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786432) > buf size (4096) 00:06:18.190 [2024-07-15 16:22:57.555993] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (8) > len (4) 00:06:18.190 [2024-07-15 16:22:57.556228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.190 [2024-07-15 16:22:57.556255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.190 [2024-07-15 16:22:57.556311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.190 [2024-07-15 16:22:57.556325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.190 [2024-07-15 16:22:57.556384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.190 [2024-07-15 16:22:57.556397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.190 #18 NEW cov: 12186 ft: 13565 corp: 5/48b lim: 30 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:18.190 [2024-07-15 16:22:57.605810] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (115716) > buf size (4096) 00:06:18.190 [2024-07-15 16:22:57.606037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.190 [2024-07-15 16:22:57.606065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.190 #21 NEW cov: 12186 ft: 13652 corp: 6/59b lim: 30 exec/s: 0 rss: 71Mb L: 11/18 MS: 3 InsertByte-InsertByte-PersAutoDict- DE: "q\000\000\000\000\000\000\000"- 00:06:18.190 [2024-07-15 16:22:57.645921] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2326 00:06:18.190 [2024-07-15 16:22:57.646140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.191 [2024-07-15 16:22:57.646166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.191 #22 NEW cov: 12186 ft: 13786 corp: 7/65b lim: 30 exec/s: 0 rss: 71Mb L: 6/18 MS: 1 EraseBytes- 00:06:18.191 [2024-07-15 16:22:57.696102] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:18.191 [2024-07-15 16:22:57.696332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a360071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.191 [2024-07-15 16:22:57.696360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.191 #23 NEW cov: 12186 ft: 13855 corp: 8/75b lim: 30 exec/s: 0 rss: 71Mb L: 10/18 MS: 1 ChangeByte- 00:06:18.191 [2024-07-15 16:22:57.736163] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (115716) > buf size (4096) 00:06:18.191 [2024-07-15 16:22:57.736506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.191 [2024-07-15 16:22:57.736532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.191 [2024-07-15 16:22:57.736590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.191 [2024-07-15 16:22:57.736604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.191 #24 NEW cov: 12196 ft: 14197 corp: 9/91b lim: 30 exec/s: 0 rss: 71Mb L: 16/18 MS: 1 InsertRepeatedBytes- 00:06:18.449 [2024-07-15 16:22:57.786397] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (115716) > buf size (4096) 00:06:18.449 [2024-07-15 16:22:57.786623] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page 
offset 0x100007575 00:06:18.449 [2024-07-15 16:22:57.786841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.449 [2024-07-15 16:22:57.786865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.449 [2024-07-15 16:22:57.786922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.449 [2024-07-15 16:22:57.786936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.449 [2024-07-15 16:22:57.786991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:75758175 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.449 [2024-07-15 16:22:57.787005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.449 #25 NEW cov: 12196 ft: 14230 corp: 10/114b lim: 30 exec/s: 0 rss: 71Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:06:18.449 [2024-07-15 16:22:57.836450] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:18.450 [2024-07-15 16:22:57.836672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.450 [2024-07-15 16:22:57.836698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.450 #26 NEW cov: 12196 ft: 14308 corp: 11/120b lim: 30 exec/s: 0 rss: 71Mb L: 6/23 MS: 1 InsertRepeatedBytes- 00:06:18.450 [2024-07-15 16:22:57.876617] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:18.450 [2024-07-15 16:22:57.876839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.450 [2024-07-15 16:22:57.876864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.450 #27 NEW cov: 12196 ft: 14421 corp: 12/130b lim: 30 exec/s: 0 rss: 71Mb L: 10/23 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:18.450 [2024-07-15 16:22:57.916690] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000026 00:06:18.450 [2024-07-15 16:22:57.916919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.450 [2024-07-15 16:22:57.916945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.450 #28 NEW cov: 12196 ft: 14475 corp: 13/136b lim: 30 exec/s: 0 rss: 71Mb L: 6/23 MS: 1 ShuffleBytes- 00:06:18.450 [2024-07-15 16:22:57.956838] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (902148) > buf size (4096) 00:06:18.450 [2024-07-15 16:22:57.957059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.450 [2024-07-15 16:22:57.957085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.450 #29 NEW cov: 12196 ft: 14578 corp: 14/143b lim: 30 exec/s: 0 rss: 71Mb L: 7/23 MS: 1 InsertByte- 00:06:18.450 [2024-07-15 16:22:58.006972] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ff0a 00:06:18.450 [2024-07-15 16:22:58.007211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.450 [2024-07-15 16:22:58.007236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.450 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:18.450 #30 NEW cov: 12219 ft: 14632 corp: 15/149b lim: 30 exec/s: 0 rss: 71Mb L: 6/23 MS: 1 ChangeBit- 00:06:18.707 [2024-07-15 16:22:58.057140] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:18.707 [2024-07-15 16:22:58.057359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a3600e4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.707 [2024-07-15 16:22:58.057385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.707 #31 NEW cov: 12219 ft: 14671 corp: 16/159b lim: 30 exec/s: 0 rss: 71Mb L: 10/23 MS: 1 ChangeByte- 00:06:18.707 [2024-07-15 16:22:58.097263] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:18.707 [2024-07-15 16:22:58.097590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a360071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.707 [2024-07-15 16:22:58.097615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.707 [2024-07-15 16:22:58.097673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.707 [2024-07-15 16:22:58.097687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.707 #32 NEW cov: 12219 ft: 14687 corp: 17/175b lim: 30 exec/s: 0 rss: 71Mb L: 16/23 MS: 1 CopyPart- 00:06:18.707 [2024-07-15 16:22:58.137411] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:18.707 [2024-07-15 16:22:58.137538] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa36 00:06:18.707 [2024-07-15 16:22:58.137647] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (233476) > buf size (4096) 00:06:18.707 [2024-07-15 16:22:58.137862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a360071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.707 [2024-07-15 16:22:58.137887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.707 [2024-07-15 16:22:58.137944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.707 [2024-07-15 16:22:58.137957] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.707 [2024-07-15 16:22:58.138014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e4000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.707 [2024-07-15 16:22:58.138031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.707 #33 NEW cov: 12219 ft: 14711 corp: 18/195b lim: 30 exec/s: 33 rss: 71Mb L: 20/23 MS: 1 CrossOver- 00:06:18.707 [2024-07-15 16:22:58.177516] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (115716) > buf size (4096) 00:06:18.707 [2024-07-15 16:22:58.177846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.707 [2024-07-15 16:22:58.177872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.707 [2024-07-15 16:22:58.177930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.707 [2024-07-15 16:22:58.177944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.707 #34 NEW cov: 12219 ft: 14732 corp: 19/211b lim: 30 exec/s: 34 rss: 71Mb L: 16/23 MS: 1 CopyPart- 00:06:18.708 [2024-07-15 16:22:58.217609] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (902148) > buf size (4096) 00:06:18.708 [2024-07-15 16:22:58.217825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.708 [2024-07-15 16:22:58.217851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.708 #35 NEW cov: 12219 ft: 14824 corp: 20/218b lim: 30 exec/s: 35 rss: 71Mb L: 7/23 MS: 1 ChangeByte- 00:06:18.708 [2024-07-15 16:22:58.267723] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:18.708 [2024-07-15 16:22:58.267967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a360071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.708 [2024-07-15 16:22:58.267991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.708 #36 NEW cov: 12219 ft: 14843 corp: 21/224b lim: 30 exec/s: 36 rss: 71Mb L: 6/23 MS: 1 EraseBytes- 00:06:18.966 [2024-07-15 16:22:58.307853] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xfa 00:06:18.966 [2024-07-15 16:22:58.308089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a360071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.308114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.966 #37 NEW cov: 12219 ft: 14862 corp: 22/234b lim: 30 exec/s: 37 rss: 71Mb L: 10/23 MS: 1 ChangeBinInt- 00:06:18.966 [2024-07-15 16:22:58.347928] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:18.966 
[2024-07-15 16:22:58.348150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a3683ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.348175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.966 #38 NEW cov: 12219 ft: 14878 corp: 23/244b lim: 30 exec/s: 38 rss: 71Mb L: 10/23 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:18.966 [2024-07-15 16:22:58.388106] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:18.966 [2024-07-15 16:22:58.388430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a360071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.388458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.388516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.388532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.966 #39 NEW cov: 12219 ft: 14883 corp: 24/260b lim: 30 exec/s: 39 rss: 71Mb L: 16/23 MS: 1 CrossOver- 00:06:18.966 [2024-07-15 16:22:58.428285] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:18.966 [2024-07-15 16:22:58.428411] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000f6f6 00:06:18.966 [2024-07-15 16:22:58.428531] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ff36 00:06:18.966 [2024-07-15 16:22:58.428643] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (115716) > buf size (4096) 00:06:18.966 [2024-07-15 16:22:58.428872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.428897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.428955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02f6 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.428969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.429027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f6f602f6 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.429040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.429097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.429111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.966 #40 NEW cov: 12219 ft: 15403 corp: 25/286b lim: 30 
exec/s: 40 rss: 71Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:06:18.966 [2024-07-15 16:22:58.478360] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:18.966 [2024-07-15 16:22:58.478503] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000023 00:06:18.966 [2024-07-15 16:22:58.478721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:710083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.478746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.478806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.478820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.966 #41 NEW cov: 12219 ft: 15419 corp: 26/300b lim: 30 exec/s: 41 rss: 72Mb L: 14/26 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:18.966 [2024-07-15 16:22:58.518483] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (115716) > buf size (4096) 00:06:18.966 [2024-07-15 16:22:58.518608] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:06:18.966 [2024-07-15 16:22:58.518739] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (38916) > buf size (4096) 00:06:18.966 [2024-07-15 16:22:58.518961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.518986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.519042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.519059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.519116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:26000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.519130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.966 #42 NEW cov: 12219 ft: 15448 corp: 27/321b lim: 30 exec/s: 42 rss: 72Mb L: 21/26 MS: 1 CrossOver- 00:06:18.966 [2024-07-15 16:22:58.558701] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:18.966 [2024-07-15 16:22:58.558821] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:18.966 [2024-07-15 16:22:58.558930] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:18.966 [2024-07-15 16:22:58.559039] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:18.966 [2024-07-15 16:22:58.559153] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:18.966 [2024-07-15 16:22:58.559378] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a3600e4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.559404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.559464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.559479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.559535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.559549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.966 [2024-07-15 16:22:58.559606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.966 [2024-07-15 16:22:58.559620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.224 [2024-07-15 16:22:58.559678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.224 [2024-07-15 16:22:58.559692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:19.224 #43 NEW cov: 12219 ft: 15556 corp: 28/351b lim: 30 exec/s: 43 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:19.224 [2024-07-15 16:22:58.608753] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:19.224 [2024-07-15 16:22:58.608872] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000023 00:06:19.224 [2024-07-15 16:22:58.609096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.224 [2024-07-15 16:22:58.609122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.224 [2024-07-15 16:22:58.609182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.224 [2024-07-15 16:22:58.609196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.224 #44 NEW cov: 12219 ft: 15570 corp: 29/365b lim: 30 exec/s: 44 rss: 72Mb L: 14/30 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:19.224 [2024-07-15 16:22:58.658839] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:19.224 [2024-07-15 16:22:58.659056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.224 [2024-07-15 16:22:58.659082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:19.224 #45 NEW cov: 12219 ft: 15609 corp: 30/374b lim: 30 exec/s: 45 rss: 72Mb L: 9/30 MS: 1 ShuffleBytes- 00:06:19.224 [2024-07-15 16:22:58.709029] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa36 00:06:19.224 [2024-07-15 16:22:58.709151] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (115716) > buf size (4096) 00:06:19.224 [2024-07-15 16:22:58.709361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.224 [2024-07-15 16:22:58.709387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.224 [2024-07-15 16:22:58.709449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.224 [2024-07-15 16:22:58.709463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.224 #46 NEW cov: 12219 ft: 15630 corp: 31/389b lim: 30 exec/s: 46 rss: 72Mb L: 15/30 MS: 1 CrossOver- 00:06:19.224 [2024-07-15 16:22:58.749171] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006666 00:06:19.224 [2024-07-15 16:22:58.749289] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006666 00:06:19.225 [2024-07-15 16:22:58.749403] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006666 00:06:19.225 [2024-07-15 16:22:58.749519] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006666 00:06:19.225 [2024-07-15 16:22:58.749741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff660266 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.225 [2024-07-15 16:22:58.749766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.225 [2024-07-15 16:22:58.749825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:66660266 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.225 [2024-07-15 16:22:58.749839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.225 [2024-07-15 16:22:58.749896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:66660266 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.225 [2024-07-15 16:22:58.749910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.225 [2024-07-15 16:22:58.749966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:66660266 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.225 [2024-07-15 16:22:58.749979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.225 #48 NEW cov: 12219 ft: 15651 corp: 32/417b lim: 30 exec/s: 48 rss: 72Mb L: 28/30 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:19.225 [2024-07-15 16:22:58.799326] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524288) > buf size (4096) 00:06:19.225 [2024-07-15 16:22:58.799453] 
ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x23 00:06:19.225 [2024-07-15 16:22:58.799704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.225 [2024-07-15 16:22:58.799729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.225 [2024-07-15 16:22:58.799791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.225 [2024-07-15 16:22:58.799805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.483 #49 NEW cov: 12219 ft: 15658 corp: 33/431b lim: 30 exec/s: 49 rss: 72Mb L: 14/30 MS: 1 PersAutoDict- DE: "q\000\000\000\000\000\000\000"- 00:06:19.483 [2024-07-15 16:22:58.849421] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:19.483 [2024-07-15 16:22:58.849761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a360071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:58.849787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.483 [2024-07-15 16:22:58.849839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:58.849854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.483 #50 NEW cov: 12219 ft: 15669 corp: 34/447b lim: 30 exec/s: 50 rss: 72Mb L: 16/30 MS: 1 ChangeBit- 00:06:19.483 [2024-07-15 16:22:58.899588] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10460) > buf size (4096) 00:06:19.483 [2024-07-15 16:22:58.899712] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:19.483 [2024-07-15 16:22:58.899826] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:19.483 [2024-07-15 16:22:58.900042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a3600e4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:58.900067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.483 [2024-07-15 16:22:58.900125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:58.900139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.483 [2024-07-15 16:22:58.900197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:58.900210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.483 #51 NEW cov: 12219 ft: 15707 corp: 35/467b lim: 30 exec/s: 51 rss: 72Mb L: 20/30 MS: 1 
EraseBytes- 00:06:19.483 [2024-07-15 16:22:58.949857] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524288) > buf size (4096) 00:06:19.483 [2024-07-15 16:22:58.949978] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x23 00:06:19.483 [2024-07-15 16:22:58.950092] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:19.483 [2024-07-15 16:22:58.950208] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000026 00:06:19.483 [2024-07-15 16:22:58.950431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:58.950460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.483 [2024-07-15 16:22:58.950520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:58.950534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.483 [2024-07-15 16:22:58.950596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:58.950610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.483 [2024-07-15 16:22:58.950666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:58.950680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.483 #52 NEW cov: 12219 ft: 15715 corp: 36/491b lim: 30 exec/s: 52 rss: 73Mb L: 24/30 MS: 1 InsertRepeatedBytes- 00:06:19.483 [2024-07-15 16:22:58.999772] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:19.483 [2024-07-15 16:22:58.999986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:59.000011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.483 #56 NEW cov: 12219 ft: 15732 corp: 37/500b lim: 30 exec/s: 56 rss: 73Mb L: 9/30 MS: 4 CrossOver-ShuffleBytes-ChangeByte-PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:19.483 [2024-07-15 16:22:59.039958] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (115716) > buf size (4096) 00:06:19.483 [2024-07-15 16:22:59.040076] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:06:19.483 [2024-07-15 16:22:59.040183] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100007575 00:06:19.483 [2024-07-15 16:22:59.040396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:59.040420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:19.483 [2024-07-15 16:22:59.040475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:59.040490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.483 [2024-07-15 16:22:59.040546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:75758175 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.483 [2024-07-15 16:22:59.040559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.483 #57 NEW cov: 12219 ft: 15745 corp: 38/523b lim: 30 exec/s: 57 rss: 73Mb L: 23/30 MS: 1 ChangeByte- 00:06:19.742 [2024-07-15 16:22:59.090169] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x23 00:06:19.742 [2024-07-15 16:22:59.090418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.742 [2024-07-15 16:22:59.090456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.742 [2024-07-15 16:22:59.090514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010071 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.742 [2024-07-15 16:22:59.090528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.742 #58 NEW cov: 12219 ft: 15761 corp: 39/538b lim: 30 exec/s: 58 rss: 73Mb L: 15/30 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:06:19.742 [2024-07-15 16:22:59.140179] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007126 00:06:19.742 [2024-07-15 16:22:59.140437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.742 [2024-07-15 16:22:59.140470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.742 #59 NEW cov: 12219 ft: 15819 corp: 40/544b lim: 30 exec/s: 29 rss: 73Mb L: 6/30 MS: 1 ShuffleBytes- 00:06:19.742 #59 DONE cov: 12219 ft: 15819 corp: 40/544b lim: 30 exec/s: 29 rss: 73Mb 00:06:19.742 ###### Recommended dictionary. ###### 00:06:19.742 "q\000\000\000\000\000\000\000" # Uses: 2 00:06:19.742 "\377\377\377\377\377\377\377\377" # Uses: 5 00:06:19.742 "\000\000\000\000\000\000\000\001" # Uses: 0 00:06:19.742 ###### End of recommended dictionary. 
###### 00:06:19.742 Done 59 runs in 2 second(s) 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:19.742 16:22:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:19.742 [2024-07-15 16:22:59.327148] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
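[editor note] The nvmf/run.sh trace above walks through the setup for the second fuzzer instance: deriving the TCP service id for run 2, creating the per-run corpus directory, rewriting fuzz_json.conf to point at that port, writing LeakSanitizer leak suppressions, and then launching llvm_nvme_fuzz. Condensed as a shell sketch for readability — this is an illustrative reconstruction, not the script itself: the run index variable, the rootdir shorthand, the "44" + zero-padded-index port arithmetic, the redirection of the leak: lines into the suppression file, and the export of LSAN_OPTIONS are assumptions; the paths, sed expression, suppression names, and LSAN_OPTIONS string are taken from the trace.

    # Sketch of the per-run setup visible in the trace above (structure assumed).
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # assumed shorthand for the workspace path in the trace
    i=2                                                           # run index for this fuzzer instance (assumed variable name)
    port="44$(printf '%02d' "$i")"                                # -> 4402, consistent with "printf %02d 2" / "port=4402" in the trace
    corpus_dir="$rootdir/../corpus/llvm_nvmf_$i"
    mkdir -p "$corpus_dir"
    # Rewrite the listener service id in the shared JSON config for this run.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$i.conf"
    # Known allocation-at-shutdown paths suppressed for LeakSanitizer; the trace shows the
    # echo commands, the redirection into the suppression file is assumed.
    {
      echo "leak:spdk_nvmf_qpair_disconnect"
      echo "leak:nvmf_ctrlr_create"
    } > /var/tmp/suppress_nvmf_fuzz
    # The trace shows a shell-local assignment; exporting here keeps the sketch self-contained.
    export LSAN_OPTIONS="report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0"
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$rootdir/../output/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c "/tmp/fuzz_json_$i.conf" -t 1 -D "$corpus_dir" -Z "$i"

[end editor note; the captured log continues below]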
00:06:19.742 [2024-07-15 16:22:59.327219] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023106 ] 00:06:20.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.041 [2024-07-15 16:22:59.500283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.041 [2024-07-15 16:22:59.565356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.042 [2024-07-15 16:22:59.624106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.299 [2024-07-15 16:22:59.640430] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:20.299 INFO: Running with entropic power schedule (0xFF, 100). 00:06:20.299 INFO: Seed: 637256466 00:06:20.299 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:20.299 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:20.299 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:20.299 INFO: A corpus is not provided, starting from an empty corpus 00:06:20.299 #2 INITED exec/s: 0 rss: 64Mb 00:06:20.299 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:20.299 This may also happen if the target rejected all inputs we tried so far 00:06:20.299 [2024-07-15 16:22:59.706058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.299 [2024-07-15 16:22:59.706087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.299 [2024-07-15 16:22:59.706143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.299 [2024-07-15 16:22:59.706157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.299 [2024-07-15 16:22:59.706213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.299 [2024-07-15 16:22:59.706226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.299 [2024-07-15 16:22:59.706278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.299 [2024-07-15 16:22:59.706291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.559 NEW_FUNC[1/695]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:20.559 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:20.559 #5 NEW cov: 11885 ft: 11885 corp: 2/34b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 3 InsertByte-CrossOver-InsertRepeatedBytes- 00:06:20.559 [2024-07-15 16:23:00.037034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.037091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.559 [2024-07-15 16:23:00.037170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.037195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.559 [2024-07-15 16:23:00.037270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.037294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.559 [2024-07-15 16:23:00.037370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.037395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.559 #6 NEW cov: 12015 ft: 12615 corp: 3/67b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 CrossOver- 00:06:20.559 [2024-07-15 16:23:00.096711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.096742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.559 [2024-07-15 16:23:00.096798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.096813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.559 #8 NEW cov: 12021 ft: 13303 corp: 4/87b lim: 35 exec/s: 0 rss: 70Mb L: 20/33 MS: 2 ChangeByte-CrossOver- 00:06:20.559 [2024-07-15 16:23:00.137085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.137111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.559 [2024-07-15 16:23:00.137165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.137179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.559 [2024-07-15 16:23:00.137233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ccee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.137246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.559 [2024-07-15 16:23:00.137300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.559 [2024-07-15 16:23:00.137313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.818 #9 NEW cov: 12106 ft: 13520 corp: 5/121b lim: 35 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 InsertByte- 00:06:20.818 [2024-07-15 16:23:00.187173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.187196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.187266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.187280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.187334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.187346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.187399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.187412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.818 #10 NEW cov: 12106 ft: 13724 corp: 6/154b lim: 35 exec/s: 0 rss: 71Mb L: 33/34 MS: 1 CopyPart- 00:06:20.818 [2024-07-15 16:23:00.227251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.227276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.227331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.227345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.227396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.227413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.227469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.227482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.818 #11 NEW cov: 12106 ft: 13770 corp: 7/187b lim: 35 exec/s: 0 rss: 71Mb L: 33/34 MS: 1 CopyPart- 00:06:20.818 [2024-07-15 16:23:00.277431] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.277459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.277513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.277527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.277581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.277594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.277648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.277661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.818 #15 NEW cov: 12106 ft: 13833 corp: 8/218b lim: 35 exec/s: 0 rss: 71Mb L: 31/34 MS: 4 ShuffleBytes-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:06:20.818 [2024-07-15 16:23:00.317168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c5c500c5 cdw11:c500c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.317192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.818 #16 NEW cov: 12106 ft: 14197 corp: 9/227b lim: 35 exec/s: 0 rss: 71Mb L: 9/34 MS: 1 InsertRepeatedBytes- 00:06:20.818 [2024-07-15 16:23:00.357406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.357431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.357490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.357505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.818 #17 NEW cov: 12106 ft: 14239 corp: 10/247b lim: 35 exec/s: 0 rss: 71Mb L: 20/34 MS: 1 ChangeBinInt- 00:06:20.818 [2024-07-15 16:23:00.407784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.407809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.407865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.407880] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.407934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0031 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.407951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.818 [2024-07-15 16:23:00.408004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.818 [2024-07-15 16:23:00.408017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.076 #18 NEW cov: 12106 ft: 14337 corp: 11/279b lim: 35 exec/s: 0 rss: 71Mb L: 32/34 MS: 1 InsertByte- 00:06:21.076 [2024-07-15 16:23:00.457967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.076 [2024-07-15 16:23:00.457991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.076 [2024-07-15 16:23:00.458047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.076 [2024-07-15 16:23:00.458061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.076 [2024-07-15 16:23:00.458111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.076 [2024-07-15 16:23:00.458124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.076 [2024-07-15 16:23:00.458177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.076 [2024-07-15 16:23:00.458190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.076 #19 NEW cov: 12106 ft: 14412 corp: 12/312b lim: 35 exec/s: 0 rss: 71Mb L: 33/34 MS: 1 CopyPart- 00:06:21.076 [2024-07-15 16:23:00.498015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.076 [2024-07-15 16:23:00.498039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.498094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f7ff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.498107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.498162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.498174] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.498227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.498240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.077 #20 NEW cov: 12106 ft: 14477 corp: 13/345b lim: 35 exec/s: 0 rss: 71Mb L: 33/34 MS: 1 ChangeBit- 00:06:21.077 [2024-07-15 16:23:00.548156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.548181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.548234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.548250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.548303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ccee00ee cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.548317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.548368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ee00ffee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.548381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.077 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:21.077 #21 NEW cov: 12129 ft: 14532 corp: 14/379b lim: 35 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 CrossOver- 00:06:21.077 [2024-07-15 16:23:00.598164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.598188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.598241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:0a00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.598255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.598308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee0060 cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.598337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.077 #22 NEW cov: 12129 ft: 14721 corp: 15/403b lim: 35 exec/s: 0 rss: 71Mb L: 24/34 MS: 1 CrossOver- 00:06:21.077 [2024-07-15 16:23:00.648568] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.648594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.648649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.648662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.648717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ccee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.648730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.648785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee0060 cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.648797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.077 [2024-07-15 16:23:00.648848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.077 [2024-07-15 16:23:00.648861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.077 #23 NEW cov: 12129 ft: 14774 corp: 16/438b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CrossOver- 00:06:21.335 [2024-07-15 16:23:00.688563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.688590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.688643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.688656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.688709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00e6 cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.688721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.688774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.688787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.335 #24 NEW cov: 12129 ft: 14861 corp: 17/471b lim: 35 exec/s: 24 rss: 71Mb L: 33/35 MS: 1 ChangeBit- 00:06:21.335 [2024-07-15 16:23:00.728568] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.728593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.728647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.728661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.728712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.728726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.335 #25 NEW cov: 12129 ft: 14899 corp: 18/496b lim: 35 exec/s: 25 rss: 72Mb L: 25/35 MS: 1 EraseBytes- 00:06:21.335 [2024-07-15 16:23:00.778957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.778981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.779036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.779049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.779101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ccee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.779114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.779166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0dee0060 cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.779179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.779232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.779247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.335 #26 NEW cov: 12129 ft: 14915 corp: 19/531b lim: 35 exec/s: 26 rss: 72Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:21.335 [2024-07-15 16:23:00.828949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.828973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.829024] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.829037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.829091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.829104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.829155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.829168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.335 #27 NEW cov: 12129 ft: 14934 corp: 20/562b lim: 35 exec/s: 27 rss: 72Mb L: 31/35 MS: 1 CopyPart- 00:06:21.335 [2024-07-15 16:23:00.869190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.869213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.869268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.869281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.869332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.869362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.869417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.869430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.869486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ff1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.869499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.335 #28 NEW cov: 12129 ft: 14948 corp: 21/597b lim: 35 exec/s: 28 rss: 72Mb L: 35/35 MS: 1 CMP- DE: "\377\377\377\036"- 00:06:21.335 [2024-07-15 16:23:00.908937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.908960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.335 [2024-07-15 16:23:00.909014] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.335 [2024-07-15 16:23:00.909028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.594 #29 NEW cov: 12129 ft: 14987 corp: 22/617b lim: 35 exec/s: 29 rss: 72Mb L: 20/35 MS: 1 ShuffleBytes- 00:06:21.594 [2024-07-15 16:23:00.949295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:00.949319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.594 [2024-07-15 16:23:00.949374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:00.949387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.594 [2024-07-15 16:23:00.949438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:00.949456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.594 [2024-07-15 16:23:00.949525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ff1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:00.949539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.594 #30 NEW cov: 12129 ft: 14992 corp: 23/645b lim: 35 exec/s: 30 rss: 72Mb L: 28/35 MS: 1 EraseBytes- 00:06:21.594 [2024-07-15 16:23:00.999182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:00.999206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.594 [2024-07-15 16:23:00.999262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:0a00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:00.999275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.594 #31 NEW cov: 12129 ft: 15067 corp: 24/665b lim: 35 exec/s: 31 rss: 72Mb L: 20/35 MS: 1 CrossOver- 00:06:21.594 [2024-07-15 16:23:01.039189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0a001e0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:01.039214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.594 #33 NEW cov: 12129 ft: 15090 corp: 25/672b lim: 35 exec/s: 33 rss: 72Mb L: 7/35 MS: 2 CrossOver-PersAutoDict- DE: "\377\377\377\036"- 00:06:21.594 [2024-07-15 16:23:01.089313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0a001f0a SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:01.089337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.594 #34 NEW cov: 12129 ft: 15155 corp: 26/679b lim: 35 exec/s: 34 rss: 72Mb L: 7/35 MS: 1 ChangeBit- 00:06:21.594 [2024-07-15 16:23:01.139559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:01.139583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.594 [2024-07-15 16:23:01.139637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ee0a00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.594 [2024-07-15 16:23:01.139650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.594 #35 NEW cov: 12129 ft: 15167 corp: 27/696b lim: 35 exec/s: 35 rss: 72Mb L: 17/35 MS: 1 EraseBytes- 00:06:21.853 [2024-07-15 16:23:01.189933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.189957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.190009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.190022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.190073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.190086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.190136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ff1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.190149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.853 #36 NEW cov: 12129 ft: 15177 corp: 28/724b lim: 35 exec/s: 36 rss: 72Mb L: 28/35 MS: 1 ShuffleBytes- 00:06:21.853 [2024-07-15 16:23:01.239858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.239883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.239936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.239948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.853 #37 NEW cov: 12129 ft: 15191 corp: 29/741b lim: 35 exec/s: 37 rss: 72Mb L: 17/35 MS: 1 
EraseBytes- 00:06:21.853 [2024-07-15 16:23:01.290269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.290292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.290361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:2600eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.290374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.290426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.290439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.290496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.290509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.853 #38 NEW cov: 12129 ft: 15255 corp: 30/775b lim: 35 exec/s: 38 rss: 72Mb L: 34/35 MS: 1 InsertByte- 00:06:21.853 [2024-07-15 16:23:01.330354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.330377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.330452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.330466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.330518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.330531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.330583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.330596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.853 #39 NEW cov: 12129 ft: 15264 corp: 31/806b lim: 35 exec/s: 39 rss: 72Mb L: 31/35 MS: 1 CopyPart- 00:06:21.853 [2024-07-15 16:23:01.380496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.380519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.853 
[2024-07-15 16:23:01.380575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:2600eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.380588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.380641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.380654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.380706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.380720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.853 #40 NEW cov: 12129 ft: 15286 corp: 32/840b lim: 35 exec/s: 40 rss: 73Mb L: 34/35 MS: 1 ChangeBit- 00:06:21.853 [2024-07-15 16:23:01.430516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.430540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.430592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.430605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.853 [2024-07-15 16:23:01.430654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.853 [2024-07-15 16:23:01.430667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.112 #41 NEW cov: 12129 ft: 15292 corp: 33/865b lim: 35 exec/s: 41 rss: 73Mb L: 25/35 MS: 1 EraseBytes- 00:06:22.112 [2024-07-15 16:23:01.470667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.470692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.112 [2024-07-15 16:23:01.470746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.470759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.112 [2024-07-15 16:23:01.470808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffee00ff cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.470822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.112 #42 NEW cov: 12129 ft: 15315 
corp: 34/888b lim: 35 exec/s: 42 rss: 73Mb L: 23/35 MS: 1 CrossOver- 00:06:22.112 [2024-07-15 16:23:01.521003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.521026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.112 [2024-07-15 16:23:01.521095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:2600eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.521109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.112 [2024-07-15 16:23:01.521160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:eeee00ce cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.521173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.112 [2024-07-15 16:23:01.521224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.521237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.112 [2024-07-15 16:23:01.521287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.521300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:22.112 #43 NEW cov: 12129 ft: 15385 corp: 35/923b lim: 35 exec/s: 43 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:06:22.112 [2024-07-15 16:23:01.560652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c5c500c5 cdw11:c500c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.560677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.112 #44 NEW cov: 12129 ft: 15391 corp: 36/931b lim: 35 exec/s: 44 rss: 73Mb L: 8/35 MS: 1 EraseBytes- 00:06:22.112 [2024-07-15 16:23:01.610916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.112 [2024-07-15 16:23:01.610940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.113 [2024-07-15 16:23:01.610994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ee0a00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.113 [2024-07-15 16:23:01.611008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.113 #45 NEW cov: 12129 ft: 15411 corp: 37/948b lim: 35 exec/s: 45 rss: 73Mb L: 17/35 MS: 1 ChangeByte- 00:06:22.113 [2024-07-15 16:23:01.660912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c5c500c5 cdw11:c500c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:22.113 [2024-07-15 16:23:01.660935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.113 #46 NEW cov: 12129 ft: 15437 corp: 38/957b lim: 35 exec/s: 46 rss: 73Mb L: 9/35 MS: 1 ChangeBit- 00:06:22.113 [2024-07-15 16:23:01.701398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a60000a cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.113 [2024-07-15 16:23:01.701422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.113 [2024-07-15 16:23:01.701482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.113 [2024-07-15 16:23:01.701496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.113 [2024-07-15 16:23:01.701550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:e8ee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.113 [2024-07-15 16:23:01.701564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.113 [2024-07-15 16:23:01.701618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eeee00ee cdw11:ee00eeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.113 [2024-07-15 16:23:01.701631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.372 #47 NEW cov: 12129 ft: 15441 corp: 39/990b lim: 35 exec/s: 23 rss: 73Mb L: 33/35 MS: 1 ChangeBinInt- 00:06:22.372 #47 DONE cov: 12129 ft: 15441 corp: 39/990b lim: 35 exec/s: 23 rss: 73Mb 00:06:22.372 ###### Recommended dictionary. ###### 00:06:22.372 "\377\377\377\036" # Uses: 1 00:06:22.372 ###### End of recommended dictionary. 
###### 00:06:22.372 Done 47 runs in 2 second(s) 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:22.372 16:23:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:22.372 [2024-07-15 16:23:01.887975] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:22.372 [2024-07-15 16:23:01.888054] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023402 ] 00:06:22.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.631 [2024-07-15 16:23:02.066870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.631 [2024-07-15 16:23:02.132925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.631 [2024-07-15 16:23:02.191980] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.631 [2024-07-15 16:23:02.208285] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:22.631 INFO: Running with entropic power schedule (0xFF, 100). 00:06:22.631 INFO: Seed: 3205266849 00:06:22.890 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:22.890 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:22.890 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:22.890 INFO: A corpus is not provided, starting from an empty corpus 00:06:22.890 #2 INITED exec/s: 0 rss: 64Mb 00:06:22.890 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:22.890 This may also happen if the target rejected all inputs we tried so far 00:06:23.149 NEW_FUNC[1/684]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:23.149 NEW_FUNC[2/684]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:23.149 #14 NEW cov: 11799 ft: 11800 corp: 2/13b lim: 20 exec/s: 0 rss: 70Mb L: 12/12 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:23.149 [2024-07-15 16:23:02.625629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.149 [2024-07-15 16:23:02.625680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.149 NEW_FUNC[1/20]: 0x11db1b0 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3359 00:06:23.149 NEW_FUNC[2/20]: 0x11dbd30 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3301 00:06:23.149 #20 NEW cov: 12251 ft: 12725 corp: 3/27b lim: 20 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:06:23.149 [2024-07-15 16:23:02.675691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.149 [2024-07-15 16:23:02.675725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.149 #36 NEW cov: 12257 ft: 13034 corp: 4/41b lim: 20 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 ChangeBinInt- 00:06:23.149 [2024-07-15 16:23:02.725790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.149 [2024-07-15 16:23:02.725825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.408 #37 NEW cov: 12342 
ft: 13266 corp: 5/56b lim: 20 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 InsertByte- 00:06:23.408 [2024-07-15 16:23:02.775907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.408 [2024-07-15 16:23:02.775938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.408 NEW_FUNC[1/2]: 0x1174940 in nvmf_ctrlr_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3432 00:06:23.408 NEW_FUNC[2/2]: 0x1175550 in spdk_nvmf_request_get_bdev /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4923 00:06:23.408 #38 NEW cov: 12372 ft: 13429 corp: 6/71b lim: 20 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 ChangeBinInt- 00:06:23.408 [2024-07-15 16:23:02.826182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.408 [2024-07-15 16:23:02.826212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.408 #39 NEW cov: 12375 ft: 13540 corp: 7/85b lim: 20 exec/s: 0 rss: 71Mb L: 14/15 MS: 1 CMP- DE: "\001\002"- 00:06:23.408 [2024-07-15 16:23:02.865947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.408 [2024-07-15 16:23:02.865978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.408 #40 NEW cov: 12375 ft: 13659 corp: 8/100b lim: 20 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 ChangeByte- 00:06:23.408 [2024-07-15 16:23:02.906333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.408 [2024-07-15 16:23:02.906365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.408 #41 NEW cov: 12375 ft: 13785 corp: 9/114b lim: 20 exec/s: 0 rss: 71Mb L: 14/15 MS: 1 ShuffleBytes- 00:06:23.408 [2024-07-15 16:23:02.946512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.408 [2024-07-15 16:23:02.946541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.408 #42 NEW cov: 12375 ft: 13809 corp: 10/129b lim: 20 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 CrossOver- 00:06:23.667 #43 NEW cov: 12375 ft: 13943 corp: 11/143b lim: 20 exec/s: 0 rss: 71Mb L: 14/15 MS: 1 ChangeByte- 00:06:23.667 #44 NEW cov: 12375 ft: 13971 corp: 12/155b lim: 20 exec/s: 0 rss: 71Mb L: 12/15 MS: 1 ChangeByte- 00:06:23.667 [2024-07-15 16:23:03.096857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.667 [2024-07-15 16:23:03.096886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.667 #45 NEW cov: 12375 ft: 14023 corp: 13/170b lim: 20 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 PersAutoDict- DE: "\001\002"- 00:06:23.667 [2024-07-15 16:23:03.137191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.667 [2024-07-15 
16:23:03.137219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.667 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:23.667 #46 NEW cov: 12398 ft: 14233 corp: 14/184b lim: 20 exec/s: 0 rss: 71Mb L: 14/15 MS: 1 PersAutoDict- DE: "\001\002"- 00:06:23.667 #47 NEW cov: 12398 ft: 14307 corp: 15/196b lim: 20 exec/s: 0 rss: 71Mb L: 12/15 MS: 1 ChangeBinInt- 00:06:23.667 [2024-07-15 16:23:03.217000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.667 [2024-07-15 16:23:03.217030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.667 #48 NEW cov: 12399 ft: 14566 corp: 16/207b lim: 20 exec/s: 0 rss: 71Mb L: 11/15 MS: 1 EraseBytes- 00:06:23.926 #49 NEW cov: 12416 ft: 14726 corp: 17/224b lim: 20 exec/s: 49 rss: 72Mb L: 17/17 MS: 1 CopyPart- 00:06:23.926 [2024-07-15 16:23:03.307809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.926 [2024-07-15 16:23:03.307838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.926 #50 NEW cov: 12416 ft: 14816 corp: 18/240b lim: 20 exec/s: 50 rss: 72Mb L: 16/17 MS: 1 CrossOver- 00:06:23.926 #52 NEW cov: 12416 ft: 15134 corp: 19/244b lim: 20 exec/s: 52 rss: 72Mb L: 4/17 MS: 2 CrossOver-InsertByte- 00:06:23.926 [2024-07-15 16:23:03.397733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.926 [2024-07-15 16:23:03.397763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.926 #53 NEW cov: 12416 ft: 15148 corp: 20/258b lim: 20 exec/s: 53 rss: 72Mb L: 14/17 MS: 1 ChangeBinInt- 00:06:23.926 [2024-07-15 16:23:03.437556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.926 [2024-07-15 16:23:03.437587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.926 #54 NEW cov: 12416 ft: 15162 corp: 21/269b lim: 20 exec/s: 54 rss: 72Mb L: 11/17 MS: 1 ChangeBit- 00:06:23.926 [2024-07-15 16:23:03.487650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.926 [2024-07-15 16:23:03.487679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.926 #55 NEW cov: 12416 ft: 15216 corp: 22/283b lim: 20 exec/s: 55 rss: 72Mb L: 14/17 MS: 1 ChangeBinInt- 00:06:24.184 [2024-07-15 16:23:03.538484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.184 [2024-07-15 16:23:03.538514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.184 #56 NEW cov: 12416 ft: 15268 corp: 23/299b lim: 20 exec/s: 56 rss: 72Mb L: 16/17 MS: 1 PersAutoDict- DE: "\001\002"- 00:06:24.184 #57 NEW cov: 12416 ft: 15285 
corp: 24/306b lim: 20 exec/s: 57 rss: 72Mb L: 7/17 MS: 1 EraseBytes- 00:06:24.184 [2024-07-15 16:23:03.648464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.184 [2024-07-15 16:23:03.648495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.184 #58 NEW cov: 12416 ft: 15318 corp: 25/321b lim: 20 exec/s: 58 rss: 72Mb L: 15/17 MS: 1 ChangeByte- 00:06:24.184 #59 NEW cov: 12416 ft: 15347 corp: 26/328b lim: 20 exec/s: 59 rss: 72Mb L: 7/17 MS: 1 ChangeBinInt- 00:06:24.184 [2024-07-15 16:23:03.748708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.184 [2024-07-15 16:23:03.748740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.184 #60 NEW cov: 12416 ft: 15352 corp: 27/342b lim: 20 exec/s: 60 rss: 72Mb L: 14/17 MS: 1 ChangeByte- 00:06:24.443 [2024-07-15 16:23:03.799270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.443 [2024-07-15 16:23:03.799304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.443 #61 NEW cov: 12416 ft: 15359 corp: 28/361b lim: 20 exec/s: 61 rss: 72Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:24.443 [2024-07-15 16:23:03.858928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.443 [2024-07-15 16:23:03.858958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.443 #62 NEW cov: 12416 ft: 15381 corp: 29/376b lim: 20 exec/s: 62 rss: 72Mb L: 15/19 MS: 1 ShuffleBytes- 00:06:24.443 #63 NEW cov: 12416 ft: 15388 corp: 30/381b lim: 20 exec/s: 63 rss: 73Mb L: 5/19 MS: 1 CopyPart- 00:06:24.443 [2024-07-15 16:23:03.959383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.443 [2024-07-15 16:23:03.959417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.443 #64 NEW cov: 12416 ft: 15402 corp: 31/396b lim: 20 exec/s: 64 rss: 73Mb L: 15/19 MS: 1 ChangeByte- 00:06:24.703 #65 NEW cov: 12416 ft: 15416 corp: 32/407b lim: 20 exec/s: 65 rss: 73Mb L: 11/19 MS: 1 CrossOver- 00:06:24.703 [2024-07-15 16:23:04.059233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.703 [2024-07-15 16:23:04.059262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.703 #66 NEW cov: 12416 ft: 15471 corp: 33/413b lim: 20 exec/s: 66 rss: 73Mb L: 6/19 MS: 1 EraseBytes- 00:06:24.703 [2024-07-15 16:23:04.109789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.703 [2024-07-15 16:23:04.109818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.703 #67 NEW cov: 12416 ft: 15492 corp: 
34/427b lim: 20 exec/s: 67 rss: 73Mb L: 14/19 MS: 1 CrossOver- 00:06:24.703 [2024-07-15 16:23:04.150061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.703 [2024-07-15 16:23:04.150091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.703 #68 NEW cov: 12416 ft: 15504 corp: 35/442b lim: 20 exec/s: 68 rss: 73Mb L: 15/19 MS: 1 ChangeBit- 00:06:24.703 #69 NEW cov: 12416 ft: 15518 corp: 36/457b lim: 20 exec/s: 69 rss: 73Mb L: 15/19 MS: 1 CopyPart- 00:06:24.703 [2024-07-15 16:23:04.230209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.703 [2024-07-15 16:23:04.230238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.703 #70 NEW cov: 12416 ft: 15542 corp: 37/471b lim: 20 exec/s: 35 rss: 73Mb L: 14/19 MS: 1 ChangeBinInt- 00:06:24.703 #70 DONE cov: 12416 ft: 15542 corp: 37/471b lim: 20 exec/s: 35 rss: 73Mb 00:06:24.703 ###### Recommended dictionary. ###### 00:06:24.703 "\001\002" # Uses: 3 00:06:24.703 ###### End of recommended dictionary. ###### 00:06:24.703 Done 70 runs in 2 second(s) 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:24.963 16:23:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:24.963 [2024-07-15 16:23:04.432191] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:24.963 [2024-07-15 16:23:04.432258] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023934 ] 00:06:24.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.222 [2024-07-15 16:23:04.611154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.222 [2024-07-15 16:23:04.676561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.222 [2024-07-15 16:23:04.735725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.222 [2024-07-15 16:23:04.752010] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:25.222 INFO: Running with entropic power schedule (0xFF, 100). 00:06:25.222 INFO: Seed: 1454283270 00:06:25.222 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:25.222 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:25.222 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:25.222 INFO: A corpus is not provided, starting from an empty corpus 00:06:25.222 #2 INITED exec/s: 0 rss: 63Mb 00:06:25.222 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:25.222 This may also happen if the target rejected all inputs we tried so far 00:06:25.481 [2024-07-15 16:23:04.817291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.481 [2024-07-15 16:23:04.817319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.739 NEW_FUNC[1/696]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:25.739 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:25.739 #8 NEW cov: 11906 ft: 11907 corp: 2/10b lim: 35 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:25.739 [2024-07-15 16:23:05.158250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.739 [2024-07-15 16:23:05.158290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.739 #9 NEW cov: 12036 ft: 12578 corp: 3/20b lim: 35 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 CrossOver- 00:06:25.739 [2024-07-15 16:23:05.218611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00e90000 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.739 [2024-07-15 16:23:05.218637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.739 [2024-07-15 16:23:05.218691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.739 [2024-07-15 16:23:05.218704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.739 [2024-07-15 16:23:05.218757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000e9e9 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.740 [2024-07-15 16:23:05.218770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.740 #20 NEW cov: 12042 ft: 13578 corp: 4/43b lim: 35 exec/s: 0 rss: 71Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:06:25.740 [2024-07-15 16:23:05.268399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.740 [2024-07-15 16:23:05.268424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.740 #21 NEW cov: 12127 ft: 13903 corp: 5/51b lim: 35 exec/s: 0 rss: 71Mb L: 8/23 MS: 1 EraseBytes- 00:06:25.740 [2024-07-15 16:23:05.308518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.740 [2024-07-15 16:23:05.308544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.740 #22 NEW cov: 12127 ft: 14051 corp: 6/59b lim: 35 
exec/s: 0 rss: 71Mb L: 8/23 MS: 1 InsertRepeatedBytes- 00:06:25.998 [2024-07-15 16:23:05.348671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.998 [2024-07-15 16:23:05.348696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.998 #23 NEW cov: 12127 ft: 14136 corp: 7/68b lim: 35 exec/s: 0 rss: 71Mb L: 9/23 MS: 1 CopyPart- 00:06:25.998 [2024-07-15 16:23:05.399095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.998 [2024-07-15 16:23:05.399121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.998 [2024-07-15 16:23:05.399178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.998 [2024-07-15 16:23:05.399191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.998 [2024-07-15 16:23:05.399245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.998 [2024-07-15 16:23:05.399259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.998 #35 NEW cov: 12127 ft: 14171 corp: 8/92b lim: 35 exec/s: 0 rss: 71Mb L: 24/24 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:25.998 [2024-07-15 16:23:05.439355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.998 [2024-07-15 16:23:05.439380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.998 [2024-07-15 16:23:05.439436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.998 [2024-07-15 16:23:05.439463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.998 [2024-07-15 16:23:05.439534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.998 [2024-07-15 16:23:05.439548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.998 [2024-07-15 16:23:05.439604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.998 [2024-07-15 16:23:05.439617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.998 #36 NEW cov: 12127 ft: 14496 corp: 9/122b lim: 35 exec/s: 0 rss: 71Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:25.998 [2024-07-15 16:23:05.479468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:25.998 [2024-07-15 16:23:05.479496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.999 [2024-07-15 16:23:05.479555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.999 [2024-07-15 16:23:05.479569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.999 [2024-07-15 16:23:05.479625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.999 [2024-07-15 16:23:05.479639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.999 [2024-07-15 16:23:05.479698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.999 [2024-07-15 16:23:05.479711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.999 #37 NEW cov: 12127 ft: 14642 corp: 10/155b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:25.999 [2024-07-15 16:23:05.529456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.999 [2024-07-15 16:23:05.529481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.999 [2024-07-15 16:23:05.529556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.999 [2024-07-15 16:23:05.529570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.999 [2024-07-15 16:23:05.529627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.999 [2024-07-15 16:23:05.529640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.999 #38 NEW cov: 12127 ft: 14717 corp: 11/180b lim: 35 exec/s: 0 rss: 71Mb L: 25/33 MS: 1 CrossOver- 00:06:25.999 [2024-07-15 16:23:05.569236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1f75012b cdw11:059a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.999 [2024-07-15 16:23:05.569261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.999 #39 NEW cov: 12127 ft: 14803 corp: 12/189b lim: 35 exec/s: 0 rss: 71Mb L: 9/33 MS: 1 CMP- DE: "\001+\037u\005\232\315\374"- 00:06:26.258 [2024-07-15 16:23:05.609376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000400 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.609401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.258 #43 NEW cov: 12127 ft: 14860 
corp: 13/202b lim: 35 exec/s: 0 rss: 71Mb L: 13/33 MS: 4 EraseBytes-ChangeBinInt-InsertByte-CrossOver- 00:06:26.258 [2024-07-15 16:23:05.659811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:11110a11 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.659835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.659909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.659923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.659982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.659996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.258 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:26.258 #44 NEW cov: 12150 ft: 14870 corp: 14/225b lim: 35 exec/s: 0 rss: 72Mb L: 23/33 MS: 1 CrossOver- 00:06:26.258 [2024-07-15 16:23:05.699959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.699985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.700042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.700056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.700125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.700139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.258 #45 NEW cov: 12150 ft: 14884 corp: 15/247b lim: 35 exec/s: 0 rss: 72Mb L: 22/33 MS: 1 EraseBytes- 00:06:26.258 [2024-07-15 16:23:05.750039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.750064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.750122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.750136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.750193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.750206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.258 #51 NEW cov: 12150 ft: 14961 corp: 16/272b lim: 35 exec/s: 0 rss: 72Mb L: 25/33 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:26.258 [2024-07-15 16:23:05.790222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:11110a11 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.790247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.790305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.790317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.790372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.790385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.258 #52 NEW cov: 12150 ft: 14973 corp: 17/295b lim: 35 exec/s: 52 rss: 72Mb L: 23/33 MS: 1 ShuffleBytes- 00:06:26.258 [2024-07-15 16:23:05.840543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:10000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.840567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.840625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00110000 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.840639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.840693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.840706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.258 [2024-07-15 16:23:05.840759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.258 [2024-07-15 16:23:05.840772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.517 #53 NEW cov: 12150 ft: 14987 corp: 18/329b lim: 35 exec/s: 53 rss: 72Mb L: 34/34 MS: 1 CMP- DE: "\377\377\377\020"- 00:06:26.517 [2024-07-15 16:23:05.890656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:05.890680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.517 [2024-07-15 16:23:05.890737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:05.890750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.517 [2024-07-15 16:23:05.890805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:05.890819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.517 [2024-07-15 16:23:05.890872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:05.890886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.517 #54 NEW cov: 12150 ft: 14999 corp: 19/359b lim: 35 exec/s: 54 rss: 72Mb L: 30/34 MS: 1 CopyPart- 00:06:26.517 [2024-07-15 16:23:05.930598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:05.930623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.517 [2024-07-15 16:23:05.930696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:05.930711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.517 [2024-07-15 16:23:05.930768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:05.930781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.517 #55 NEW cov: 12150 ft: 15010 corp: 20/384b lim: 35 exec/s: 55 rss: 72Mb L: 25/34 MS: 1 CopyPart- 00:06:26.517 [2024-07-15 16:23:05.970404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:47474747 cdw11:47470000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:05.970427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.517 #56 NEW cov: 12150 ft: 15040 corp: 21/392b lim: 35 exec/s: 56 rss: 72Mb L: 8/34 MS: 1 ShuffleBytes- 00:06:26.517 [2024-07-15 16:23:06.020563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:000b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:06.020589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.517 #57 NEW cov: 12150 ft: 15056 corp: 22/401b lim: 35 exec/s: 57 rss: 72Mb L: 9/34 MS: 1 CMP- DE: "\377\377\000\013"- 00:06:26.517 [2024-07-15 16:23:06.060982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:06.061007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.517 [2024-07-15 16:23:06.061061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2a2a472a cdw11:2a2a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:06.061074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.517 [2024-07-15 16:23:06.061129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2a2a2a2a cdw11:2a2a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:06.061143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.517 #58 NEW cov: 12150 ft: 15094 corp: 23/422b lim: 35 exec/s: 58 rss: 72Mb L: 21/34 MS: 1 InsertRepeatedBytes- 00:06:26.517 [2024-07-15 16:23:06.100791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1f75012b cdw11:059a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.517 [2024-07-15 16:23:06.100816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.778 #59 NEW cov: 12150 ft: 15098 corp: 24/430b lim: 35 exec/s: 59 rss: 72Mb L: 8/34 MS: 1 PersAutoDict- DE: "\001+\037u\005\232\315\374"- 00:06:26.778 [2024-07-15 16:23:06.151427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.151456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.778 [2024-07-15 16:23:06.151511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.151525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.778 [2024-07-15 16:23:06.151582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff5dffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.151595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.778 [2024-07-15 16:23:06.151653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.151666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.778 #60 NEW cov: 12150 ft: 15117 corp: 25/463b lim: 35 exec/s: 60 rss: 72Mb L: 33/34 MS: 1 ChangeByte- 00:06:26.778 [2024-07-15 16:23:06.201073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.201100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.778 #61 NEW cov: 12150 ft: 15166 corp: 26/472b lim: 35 exec/s: 61 rss: 72Mb L: 9/34 MS: 1 ShuffleBytes- 00:06:26.778 [2024-07-15 16:23:06.241515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.241539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.778 [2024-07-15 16:23:06.241596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.241609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.778 [2024-07-15 16:23:06.241679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:41000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.241693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.778 #62 NEW cov: 12150 ft: 15168 corp: 27/497b lim: 35 exec/s: 62 rss: 72Mb L: 25/34 MS: 1 ChangeByte- 00:06:26.778 [2024-07-15 16:23:06.291348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.291373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.778 #63 NEW cov: 12150 ft: 15261 corp: 28/506b lim: 35 exec/s: 63 rss: 72Mb L: 9/34 MS: 1 ShuffleBytes- 00:06:26.778 [2024-07-15 16:23:06.331822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:10110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.331847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.778 [2024-07-15 16:23:06.331903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.331916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.778 [2024-07-15 16:23:06.331969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.778 [2024-07-15 16:23:06.331984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.778 #64 NEW cov: 12150 ft: 15298 corp: 29/533b lim: 35 exec/s: 64 rss: 72Mb L: 27/34 MS: 1 PersAutoDict- DE: "\377\377\377\020"- 00:06:27.064 [2024-07-15 16:23:06.382012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.382037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.382093] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.382107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.382163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.382175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.064 #65 NEW cov: 12150 ft: 15314 corp: 30/557b lim: 35 exec/s: 65 rss: 72Mb L: 24/34 MS: 1 ShuffleBytes- 00:06:27.064 [2024-07-15 16:23:06.422030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.422053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.422108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff2eff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.422122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.422176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00410000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.422189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.064 #66 NEW cov: 12150 ft: 15323 corp: 31/583b lim: 35 exec/s: 66 rss: 73Mb L: 26/34 MS: 1 InsertByte- 00:06:27.064 [2024-07-15 16:23:06.472547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:000b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.472572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.472628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9000000 cdw11:00e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.472641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.472696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e90ae9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.472709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.472764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.472776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.064 #67 NEW cov: 12150 ft: 
15396 corp: 32/615b lim: 35 exec/s: 67 rss: 73Mb L: 32/34 MS: 1 CrossOver- 00:06:27.064 [2024-07-15 16:23:06.522473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff2d00ff cdw11:10000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.522497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.522551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00110000 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.522564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.522634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.522647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.522705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.522718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.064 #68 NEW cov: 12150 ft: 15407 corp: 33/649b lim: 35 exec/s: 68 rss: 73Mb L: 34/34 MS: 1 ChangeByte- 00:06:27.064 [2024-07-15 16:23:06.572641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e900ff cdw11:e9000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.572666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.572725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0a110000 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.572739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.572794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.572807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.572864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.572878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.064 #69 NEW cov: 12150 ft: 15414 corp: 34/683b lim: 35 exec/s: 69 rss: 73Mb L: 34/34 MS: 1 CrossOver- 00:06:27.064 [2024-07-15 16:23:06.612595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.612620] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.612673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.612686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.064 [2024-07-15 16:23:06.612740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.064 [2024-07-15 16:23:06.612753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.341 #70 NEW cov: 12150 ft: 15422 corp: 35/708b lim: 35 exec/s: 70 rss: 73Mb L: 25/34 MS: 1 InsertByte- 00:06:27.341 [2024-07-15 16:23:06.662425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:241f012b cdw11:75050001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.341 [2024-07-15 16:23:06.662459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.341 #71 NEW cov: 12150 ft: 15525 corp: 36/718b lim: 35 exec/s: 71 rss: 73Mb L: 10/34 MS: 1 InsertByte- 00:06:27.341 [2024-07-15 16:23:06.713030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e900ff cdw11:e9000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.341 [2024-07-15 16:23:06.713055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.341 [2024-07-15 16:23:06.713112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0a110000 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.341 [2024-07-15 16:23:06.713125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.341 [2024-07-15 16:23:06.713181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.341 [2024-07-15 16:23:06.713197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.341 [2024-07-15 16:23:06.713251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.341 [2024-07-15 16:23:06.713264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.341 #72 NEW cov: 12150 ft: 15527 corp: 37/752b lim: 35 exec/s: 72 rss: 73Mb L: 34/34 MS: 1 ChangeByte- 00:06:27.341 [2024-07-15 16:23:06.762894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:10110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.341 [2024-07-15 16:23:06.762919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.341 [2024-07-15 16:23:06.762975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2b1f1101 cdw11:75050001 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:27.341 [2024-07-15 16:23:06.762989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.341 #73 NEW cov: 12150 ft: 15738 corp: 38/768b lim: 35 exec/s: 36 rss: 73Mb L: 16/34 MS: 1 CrossOver- 00:06:27.341 #73 DONE cov: 12150 ft: 15738 corp: 38/768b lim: 35 exec/s: 36 rss: 73Mb 00:06:27.341 ###### Recommended dictionary. ###### 00:06:27.341 "\000\000\000\000\000\000\000\000" # Uses: 1 00:06:27.341 "\001+\037u\005\232\315\374" # Uses: 1 00:06:27.341 "\377\377\377\020" # Uses: 1 00:06:27.341 "\377\377\000\013" # Uses: 0 00:06:27.341 ###### End of recommended dictionary. ###### 00:06:27.341 Done 73 runs in 2 second(s) 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:27.341 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:27.600 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:27.600 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:27.600 16:23:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:27.600 [2024-07-15 16:23:06.967629] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
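The "###### Recommended dictionary ######" block printed at the end of the run above lists the byte patterns the fuzzer found useful as comparison operands ("\000\000\000\000\000\000\000\000", "\001+\037u\005\232\315\374", "\377\377\377\020", "\377\377\000\013"). As a hedged sketch only: the file name and keyword names below are made up, and this log does not show whether the llvm_nvme_fuzz wrapper forwards extra libFuzzer flags, but the same entries could be kept in a standard libFuzzer/AFL dictionary file (one quoted value per line, optional key, \xNN hex escapes) for reuse via libFuzzer's -dict=<file> option:

# Sketch: save the recommended entries (rewritten with \xNN hex escapes) in a dictionary file.
cat > nvmf_admin_cq.dict <<'EOF'
# Entries reported by the CREATE IO CQ fuzz run above
kw1="\x00\x00\x00\x00\x00\x00\x00\x00"
kw2="\x01\x2b\x1f\x75\x05\x9a\xcd\xfc"
kw3="\xff\xff\xff\x10"
kw4="\xff\xff\x00\x0b"
EOF
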
00:06:27.600 [2024-07-15 16:23:06.967723] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024381 ] 00:06:27.600 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.600 [2024-07-15 16:23:07.152814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.860 [2024-07-15 16:23:07.222910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.860 [2024-07-15 16:23:07.282129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.860 [2024-07-15 16:23:07.298456] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:27.860 INFO: Running with entropic power schedule (0xFF, 100). 00:06:27.860 INFO: Seed: 4000291170 00:06:27.860 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:27.860 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:27.860 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:27.860 INFO: A corpus is not provided, starting from an empty corpus 00:06:27.860 #2 INITED exec/s: 0 rss: 64Mb 00:06:27.860 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:27.860 This may also happen if the target rejected all inputs we tried so far 00:06:27.860 [2024-07-15 16:23:07.374928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.860 [2024-07-15 16:23:07.374964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.860 [2024-07-15 16:23:07.375088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.860 [2024-07-15 16:23:07.375107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.120 NEW_FUNC[1/694]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:28.120 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:28.120 #24 NEW cov: 11915 ft: 11915 corp: 2/24b lim: 45 exec/s: 0 rss: 70Mb L: 23/23 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:28.120 [2024-07-15 16:23:07.705852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.120 [2024-07-15 16:23:07.705889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.120 [2024-07-15 16:23:07.706007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.120 [2024-07-15 16:23:07.706024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.380 NEW_FUNC[1/2]: 0xf91940 in rte_get_tsc_cycles 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/rte_cycles.h:61 00:06:28.380 NEW_FUNC[2/2]: 0xf919a0 in rte_rdtsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/rte_cycles.h:31 00:06:28.380 #25 NEW cov: 12047 ft: 12426 corp: 3/47b lim: 45 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 ChangeBinInt- 00:06:28.380 [2024-07-15 16:23:07.765869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.380 [2024-07-15 16:23:07.765898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.380 [2024-07-15 16:23:07.766021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.380 [2024-07-15 16:23:07.766041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.380 #26 NEW cov: 12053 ft: 12734 corp: 4/70b lim: 45 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 ChangeByte- 00:06:28.380 [2024-07-15 16:23:07.826093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.380 [2024-07-15 16:23:07.826119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.380 [2024-07-15 16:23:07.826231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.380 [2024-07-15 16:23:07.826249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.380 #27 NEW cov: 12138 ft: 13046 corp: 5/93b lim: 45 exec/s: 0 rss: 71Mb L: 23/23 MS: 1 CrossOver- 00:06:28.380 [2024-07-15 16:23:07.875954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.380 [2024-07-15 16:23:07.875983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.380 #28 NEW cov: 12138 ft: 13854 corp: 6/109b lim: 45 exec/s: 0 rss: 71Mb L: 16/23 MS: 1 EraseBytes- 00:06:28.380 [2024-07-15 16:23:07.936432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00960000 cdw11:96960000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.380 [2024-07-15 16:23:07.936462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.380 [2024-07-15 16:23:07.936584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.380 [2024-07-15 16:23:07.936602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.380 #29 NEW cov: 12138 ft: 14020 corp: 7/135b lim: 45 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:06:28.640 [2024-07-15 16:23:07.986337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:28.640 [2024-07-15 16:23:07.986365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.640 #30 NEW cov: 12138 ft: 14127 corp: 8/151b lim: 45 exec/s: 0 rss: 71Mb L: 16/26 MS: 1 ChangeByte- 00:06:28.640 [2024-07-15 16:23:08.046525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.640 [2024-07-15 16:23:08.046552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.640 #31 NEW cov: 12138 ft: 14204 corp: 9/167b lim: 45 exec/s: 0 rss: 71Mb L: 16/26 MS: 1 ShuffleBytes- 00:06:28.640 [2024-07-15 16:23:08.096934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.640 [2024-07-15 16:23:08.096959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.640 [2024-07-15 16:23:08.097079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fffff9ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.640 [2024-07-15 16:23:08.097096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.640 #32 NEW cov: 12138 ft: 14232 corp: 10/190b lim: 45 exec/s: 0 rss: 71Mb L: 23/26 MS: 1 ChangeBinInt- 00:06:28.640 [2024-07-15 16:23:08.146866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.640 [2024-07-15 16:23:08.146898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.640 #33 NEW cov: 12138 ft: 14286 corp: 11/206b lim: 45 exec/s: 0 rss: 71Mb L: 16/26 MS: 1 CMP- DE: "\001\002\000\000"- 00:06:28.640 [2024-07-15 16:23:08.207029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.640 [2024-07-15 16:23:08.207057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.640 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:28.640 #34 NEW cov: 12161 ft: 14342 corp: 12/222b lim: 45 exec/s: 0 rss: 71Mb L: 16/26 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:06:28.900 [2024-07-15 16:23:08.257729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.900 [2024-07-15 16:23:08.257757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.900 [2024-07-15 16:23:08.257886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.900 [2024-07-15 16:23:08.257904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.900 [2024-07-15 16:23:08.258022] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.900 [2024-07-15 16:23:08.258051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.900 #35 NEW cov: 12161 ft: 14596 corp: 13/253b lim: 45 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 CopyPart- 00:06:28.900 [2024-07-15 16:23:08.307407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.900 [2024-07-15 16:23:08.307436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.900 #41 NEW cov: 12161 ft: 14621 corp: 14/269b lim: 45 exec/s: 41 rss: 71Mb L: 16/31 MS: 1 ChangeBit- 00:06:28.900 [2024-07-15 16:23:08.377557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00500000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.900 [2024-07-15 16:23:08.377585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.900 #42 NEW cov: 12161 ft: 14712 corp: 15/286b lim: 45 exec/s: 42 rss: 71Mb L: 17/31 MS: 1 InsertByte- 00:06:28.900 [2024-07-15 16:23:08.438091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.900 [2024-07-15 16:23:08.438120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.900 [2024-07-15 16:23:08.438241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fffff9ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.900 [2024-07-15 16:23:08.438259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.900 #43 NEW cov: 12161 ft: 14738 corp: 16/309b lim: 45 exec/s: 43 rss: 72Mb L: 23/31 MS: 1 ChangeBit- 00:06:29.159 [2024-07-15 16:23:08.497915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.159 [2024-07-15 16:23:08.497944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.159 #44 NEW cov: 12161 ft: 14748 corp: 17/325b lim: 45 exec/s: 44 rss: 72Mb L: 16/31 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:06:29.159 [2024-07-15 16:23:08.558678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.159 [2024-07-15 16:23:08.558707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.159 [2024-07-15 16:23:08.558836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.159 [2024-07-15 16:23:08.558855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.159 [2024-07-15 16:23:08.558974] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.159 [2024-07-15 16:23:08.558991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.159 #45 NEW cov: 12161 ft: 14766 corp: 18/358b lim: 45 exec/s: 45 rss: 72Mb L: 33/33 MS: 1 CrossOver- 00:06:29.159 [2024-07-15 16:23:08.608195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.159 [2024-07-15 16:23:08.608225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.159 #46 NEW cov: 12161 ft: 14780 corp: 19/369b lim: 45 exec/s: 46 rss: 72Mb L: 11/33 MS: 1 EraseBytes- 00:06:29.159 [2024-07-15 16:23:08.668618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.159 [2024-07-15 16:23:08.668648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.159 [2024-07-15 16:23:08.668766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.159 [2024-07-15 16:23:08.668785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.159 #47 NEW cov: 12161 ft: 14800 corp: 20/392b lim: 45 exec/s: 47 rss: 72Mb L: 23/33 MS: 1 CrossOver- 00:06:29.159 [2024-07-15 16:23:08.718565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.159 [2024-07-15 16:23:08.718591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.159 #48 NEW cov: 12161 ft: 14874 corp: 21/408b lim: 45 exec/s: 48 rss: 72Mb L: 16/33 MS: 1 CrossOver- 00:06:29.418 [2024-07-15 16:23:08.769251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00960000 cdw11:96960000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.769277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.418 [2024-07-15 16:23:08.769413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.769427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.418 [2024-07-15 16:23:08.769544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.769563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.418 #49 NEW cov: 12161 ft: 14896 corp: 22/438b lim: 45 exec/s: 49 rss: 72Mb L: 30/33 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:06:29.418 [2024-07-15 16:23:08.829204] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00500000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.829231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.418 [2024-07-15 16:23:08.829365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00240000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.829382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.418 #50 NEW cov: 12161 ft: 14974 corp: 23/463b lim: 45 exec/s: 50 rss: 72Mb L: 25/33 MS: 1 CopyPart- 00:06:29.418 [2024-07-15 16:23:08.889110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.889136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.418 #51 NEW cov: 12161 ft: 14993 corp: 24/474b lim: 45 exec/s: 51 rss: 72Mb L: 11/33 MS: 1 ShuffleBytes- 00:06:29.418 [2024-07-15 16:23:08.950067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.950094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.418 [2024-07-15 16:23:08.950198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:01020000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.950216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.418 [2024-07-15 16:23:08.950341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.950358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.418 [2024-07-15 16:23:08.950478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.418 [2024-07-15 16:23:08.950494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.418 #52 NEW cov: 12161 ft: 15330 corp: 25/516b lim: 45 exec/s: 52 rss: 72Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:06:29.419 [2024-07-15 16:23:08.999415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.419 [2024-07-15 16:23:08.999445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.678 #53 NEW cov: 12161 ft: 15417 corp: 26/529b lim: 45 exec/s: 53 rss: 72Mb L: 13/42 MS: 1 CopyPart- 00:06:29.678 [2024-07-15 16:23:09.059923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:29.678 [2024-07-15 16:23:09.059952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.678 [2024-07-15 16:23:09.060074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fffff9ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.678 [2024-07-15 16:23:09.060093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.678 #54 NEW cov: 12161 ft: 15433 corp: 27/552b lim: 45 exec/s: 54 rss: 72Mb L: 23/42 MS: 1 ChangeBit- 00:06:29.678 [2024-07-15 16:23:09.110413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:92920092 cdw11:92920004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.678 [2024-07-15 16:23:09.110445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.678 [2024-07-15 16:23:09.110574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:92929292 cdw11:92920004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.678 [2024-07-15 16:23:09.110592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.678 [2024-07-15 16:23:09.110702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.678 [2024-07-15 16:23:09.110719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.678 #55 NEW cov: 12161 ft: 15462 corp: 28/585b lim: 45 exec/s: 55 rss: 73Mb L: 33/42 MS: 1 InsertRepeatedBytes- 00:06:29.678 [2024-07-15 16:23:09.169996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:005b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.678 [2024-07-15 16:23:09.170022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.678 #56 NEW cov: 12161 ft: 15468 corp: 29/598b lim: 45 exec/s: 56 rss: 73Mb L: 13/42 MS: 1 ChangeByte- 00:06:29.678 [2024-07-15 16:23:09.230473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:002f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.678 [2024-07-15 16:23:09.230500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.678 [2024-07-15 16:23:09.230612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.678 [2024-07-15 16:23:09.230629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.678 #57 NEW cov: 12161 ft: 15496 corp: 30/621b lim: 45 exec/s: 57 rss: 73Mb L: 23/42 MS: 1 ChangeByte- 00:06:29.937 [2024-07-15 16:23:09.280590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.937 [2024-07-15 16:23:09.280616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.937 [2024-07-15 16:23:09.280734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:002b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.937 [2024-07-15 16:23:09.280752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.937 #58 NEW cov: 12161 ft: 15512 corp: 31/645b lim: 45 exec/s: 58 rss: 73Mb L: 24/42 MS: 1 InsertByte- 00:06:29.937 [2024-07-15 16:23:09.330771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.937 [2024-07-15 16:23:09.330796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.937 [2024-07-15 16:23:09.330912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:96009696 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.937 [2024-07-15 16:23:09.330931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.937 #64 pulse cov: 12161 ft: 15528 corp: 31/645b lim: 45 exec/s: 32 rss: 73Mb 00:06:29.937 #64 NEW cov: 12161 ft: 15528 corp: 32/670b lim: 45 exec/s: 32 rss: 73Mb L: 25/42 MS: 1 CrossOver- 00:06:29.937 #64 DONE cov: 12161 ft: 15528 corp: 32/670b lim: 45 exec/s: 32 rss: 73Mb 00:06:29.937 ###### Recommended dictionary. ###### 00:06:29.937 "\001\002\000\000" # Uses: 3 00:06:29.937 ###### End of recommended dictionary. ###### 00:06:29.937 Done 64 runs in 2 second(s) 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:29.937 16:23:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:29.937 [2024-07-15 16:23:09.519113] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:29.937 [2024-07-15 16:23:09.519184] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024759 ] 00:06:30.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.197 [2024-07-15 16:23:09.694998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.197 [2024-07-15 16:23:09.761481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.456 [2024-07-15 16:23:09.820720] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.456 [2024-07-15 16:23:09.836976] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:30.456 INFO: Running with entropic power schedule (0xFF, 100). 00:06:30.456 INFO: Seed: 2244311158 00:06:30.456 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:30.456 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:30.456 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:30.456 INFO: A corpus is not provided, starting from an empty corpus 00:06:30.456 #2 INITED exec/s: 0 rss: 63Mb 00:06:30.456 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:30.456 This may also happen if the target rejected all inputs we tried so far 00:06:30.456 [2024-07-15 16:23:09.902911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000200a cdw11:00000000 00:06:30.456 [2024-07-15 16:23:09.902949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.716 NEW_FUNC[1/694]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:30.716 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:30.716 #6 NEW cov: 11834 ft: 11835 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 4 ShuffleBytes-ChangeByte-ChangeBit-CrossOver- 00:06:30.716 [2024-07-15 16:23:10.254059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:30.716 [2024-07-15 16:23:10.254109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.716 #7 NEW cov: 11964 ft: 12380 corp: 3/5b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:30.716 [2024-07-15 16:23:10.294577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003200 cdw11:00000000 00:06:30.716 [2024-07-15 16:23:10.294605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.716 [2024-07-15 16:23:10.294715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.716 [2024-07-15 16:23:10.294731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.716 [2024-07-15 16:23:10.294842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.716 [2024-07-15 16:23:10.294859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.716 [2024-07-15 16:23:10.294966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.716 [2024-07-15 16:23:10.294983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.975 #9 NEW cov: 11970 ft: 12941 corp: 4/14b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 2 ChangeByte-CMP- DE: "\000\000\000\000\000\000\000\017"- 00:06:30.975 [2024-07-15 16:23:10.334687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003200 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.334713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.975 [2024-07-15 16:23:10.334821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.334837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.975 [2024-07-15 16:23:10.334947] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.334963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.975 [2024-07-15 16:23:10.335078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.335095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.975 #15 NEW cov: 12055 ft: 13158 corp: 5/23b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 ShuffleBytes- 00:06:30.975 [2024-07-15 16:23:10.384782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.384810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.975 [2024-07-15 16:23:10.384918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.384936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.975 [2024-07-15 16:23:10.385049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.385068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.975 [2024-07-15 16:23:10.385180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.385198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.975 #16 NEW cov: 12055 ft: 13370 corp: 6/31b lim: 10 exec/s: 0 rss: 70Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:06:30.975 [2024-07-15 16:23:10.424953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003200 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.424980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.975 [2024-07-15 16:23:10.425106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.425123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.975 [2024-07-15 16:23:10.425223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.425241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.975 [2024-07-15 16:23:10.425343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.425360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.975 #17 NEW cov: 12055 ft: 13541 corp: 7/40b 
lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 ShuffleBytes- 00:06:30.975 [2024-07-15 16:23:10.474602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000200a cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.474628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.975 #18 NEW cov: 12055 ft: 13617 corp: 8/42b lim: 10 exec/s: 0 rss: 70Mb L: 2/9 MS: 1 CopyPart- 00:06:30.975 [2024-07-15 16:23:10.524686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000200a cdw11:00000000 00:06:30.975 [2024-07-15 16:23:10.524714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.975 #19 NEW cov: 12055 ft: 13646 corp: 9/44b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:31.234 [2024-07-15 16:23:10.574829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e20 cdw11:00000000 00:06:31.234 [2024-07-15 16:23:10.574855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.234 #20 NEW cov: 12055 ft: 13661 corp: 10/47b lim: 10 exec/s: 0 rss: 71Mb L: 3/9 MS: 1 InsertByte- 00:06:31.234 [2024-07-15 16:23:10.614998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000b0a cdw11:00000000 00:06:31.234 [2024-07-15 16:23:10.615024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.234 #21 NEW cov: 12055 ft: 13696 corp: 11/49b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 ChangeBit- 00:06:31.234 [2024-07-15 16:23:10.665164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002014 cdw11:00000000 00:06:31.234 [2024-07-15 16:23:10.665191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.234 #22 NEW cov: 12055 ft: 13738 corp: 12/51b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 ChangeBinInt- 00:06:31.234 [2024-07-15 16:23:10.705991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.234 [2024-07-15 16:23:10.706017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.234 [2024-07-15 16:23:10.706137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.234 [2024-07-15 16:23:10.706156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.234 [2024-07-15 16:23:10.706263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.234 [2024-07-15 16:23:10.706281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.234 [2024-07-15 16:23:10.706388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 00:06:31.234 [2024-07-15 16:23:10.706405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.234 [2024-07-15 16:23:10.706517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00002014 cdw11:00000000 00:06:31.234 [2024-07-15 16:23:10.706533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.235 #23 NEW cov: 12055 ft: 13807 corp: 13/61b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\017"- 00:06:31.235 [2024-07-15 16:23:10.756220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:31.235 [2024-07-15 16:23:10.756245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.235 [2024-07-15 16:23:10.756363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.235 [2024-07-15 16:23:10.756380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.235 [2024-07-15 16:23:10.756501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.235 [2024-07-15 16:23:10.756518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.235 [2024-07-15 16:23:10.756631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.235 [2024-07-15 16:23:10.756649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.235 [2024-07-15 16:23:10.756762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.235 [2024-07-15 16:23:10.756778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.235 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:31.235 #24 NEW cov: 12078 ft: 13859 corp: 14/71b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 CopyPart- 00:06:31.235 [2024-07-15 16:23:10.806160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 00:06:31.235 [2024-07-15 16:23:10.806187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.235 [2024-07-15 16:23:10.806307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.235 [2024-07-15 16:23:10.806325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.235 [2024-07-15 16:23:10.806448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.235 [2024-07-15 16:23:10.806478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.235 [2024-07-15 16:23:10.806598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 
nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.235 [2024-07-15 16:23:10.806616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.235 #26 NEW cov: 12078 ft: 13881 corp: 15/80b lim: 10 exec/s: 0 rss: 71Mb L: 9/10 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:31.494 [2024-07-15 16:23:10.845679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.845705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.494 #27 NEW cov: 12078 ft: 13979 corp: 16/82b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 ChangeBit- 00:06:31.494 [2024-07-15 16:23:10.896576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.896604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.896725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.896744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.896850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.896867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.896977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.896996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.897119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.897134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.494 #28 NEW cov: 12078 ft: 13992 corp: 17/92b lim: 10 exec/s: 28 rss: 71Mb L: 10/10 MS: 1 ChangeBit- 00:06:31.494 [2024-07-15 16:23:10.946563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000200a cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.946590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.946714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f5f5 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.946734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.946845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f5f5 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.946861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:31.494 [2024-07-15 16:23:10.946980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000f5f5 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.946999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.494 #29 NEW cov: 12078 ft: 14014 corp: 18/100b lim: 10 exec/s: 29 rss: 71Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:06:31.494 [2024-07-15 16:23:10.986887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.986913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.987026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.987046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.987155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.987174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.987289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000000f1 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.987306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.494 [2024-07-15 16:23:10.987420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000d614 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:10.987437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.494 #30 NEW cov: 12078 ft: 14037 corp: 19/110b lim: 10 exec/s: 30 rss: 71Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:31.494 [2024-07-15 16:23:11.036262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e20 cdw11:00000000 00:06:31.494 [2024-07-15 16:23:11.036288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.494 #31 NEW cov: 12078 ft: 14045 corp: 20/113b lim: 10 exec/s: 31 rss: 72Mb L: 3/10 MS: 1 ChangeBit- 00:06:31.494 [2024-07-15 16:23:11.086447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000202e cdw11:00000000 00:06:31.494 [2024-07-15 16:23:11.086489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.753 #32 NEW cov: 12078 ft: 14074 corp: 21/116b lim: 10 exec/s: 32 rss: 72Mb L: 3/10 MS: 1 InsertByte- 00:06:31.753 [2024-07-15 16:23:11.126755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000202e cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.126781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.753 [2024-07-15 16:23:11.126897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000aec cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.126915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.753 #33 NEW cov: 12078 ft: 14329 corp: 22/120b lim: 10 exec/s: 33 rss: 72Mb L: 4/10 MS: 1 InsertByte- 00:06:31.753 [2024-07-15 16:23:11.177231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e00 cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.177257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.753 [2024-07-15 16:23:11.177374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.177390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.753 [2024-07-15 16:23:11.177499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.177517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.753 [2024-07-15 16:23:11.177631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.177649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.753 #37 NEW cov: 12078 ft: 14342 corp: 23/129b lim: 10 exec/s: 37 rss: 72Mb L: 9/10 MS: 4 EraseBytes-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:06:31.753 [2024-07-15 16:23:11.226726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.226755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.753 #38 NEW cov: 12078 ft: 14376 corp: 24/132b lim: 10 exec/s: 38 rss: 72Mb L: 3/10 MS: 1 CrossOver- 00:06:31.753 [2024-07-15 16:23:11.267665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003200 cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.267692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.753 [2024-07-15 16:23:11.267811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.267829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.753 [2024-07-15 16:23:11.267943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.267961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.753 [2024-07-15 16:23:11.268070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.268087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.753 [2024-07-15 16:23:11.268186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.268204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.753 #39 NEW cov: 12078 ft: 14390 corp: 25/142b lim: 10 exec/s: 39 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:06:31.753 [2024-07-15 16:23:11.307023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000202e cdw11:00000000 00:06:31.753 [2024-07-15 16:23:11.307051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.753 #40 NEW cov: 12078 ft: 14424 corp: 26/145b lim: 10 exec/s: 40 rss: 72Mb L: 3/10 MS: 1 EraseBytes- 00:06:32.012 [2024-07-15 16:23:11.357736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.012 [2024-07-15 16:23:11.357763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.012 [2024-07-15 16:23:11.357885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.012 [2024-07-15 16:23:11.357914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.012 [2024-07-15 16:23:11.358026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.012 [2024-07-15 16:23:11.358045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.012 [2024-07-15 16:23:11.358157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000003ff cdw11:00000000 00:06:32.012 [2024-07-15 16:23:11.358176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.012 #41 NEW cov: 12078 ft: 14438 corp: 27/154b lim: 10 exec/s: 41 rss: 72Mb L: 9/10 MS: 1 CMP- DE: "\000\000\000\000\000\000\003\377"- 00:06:32.012 [2024-07-15 16:23:11.407876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ceff cdw11:00000000 00:06:32.012 [2024-07-15 16:23:11.407905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.012 [2024-07-15 16:23:11.408021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fff5 cdw11:00000000 00:06:32.012 [2024-07-15 16:23:11.408039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.012 [2024-07-15 16:23:11.408157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.408174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.408289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 
cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.408308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.013 #42 NEW cov: 12078 ft: 14450 corp: 28/163b lim: 10 exec/s: 42 rss: 72Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:32.013 [2024-07-15 16:23:11.448043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003200 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.448072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.448185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.448203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.448318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.448338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.448455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.448472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.013 #43 NEW cov: 12078 ft: 14464 corp: 29/172b lim: 10 exec/s: 43 rss: 72Mb L: 9/10 MS: 1 CopyPart- 00:06:32.013 [2024-07-15 16:23:11.497607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000a00a cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.497633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.013 #44 NEW cov: 12078 ft: 14470 corp: 30/174b lim: 10 exec/s: 44 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:06:32.013 [2024-07-15 16:23:11.538290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003200 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.538319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.538433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.538456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.538569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.538586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.538699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.538715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.013 #45 NEW cov: 12078 
ft: 14492 corp: 31/183b lim: 10 exec/s: 45 rss: 72Mb L: 9/10 MS: 1 EraseBytes- 00:06:32.013 [2024-07-15 16:23:11.588416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.588448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.588568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.588584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.588697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.588715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.013 [2024-07-15 16:23:11.588827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.013 [2024-07-15 16:23:11.588845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.271 #46 NEW cov: 12078 ft: 14507 corp: 32/192b lim: 10 exec/s: 46 rss: 73Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:32.271 [2024-07-15 16:23:11.638797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002aa0 cdw11:00000000 00:06:32.271 [2024-07-15 16:23:11.638824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.271 [2024-07-15 16:23:11.638941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.638957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.639064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.639081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.639199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.639218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.639328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.639346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.272 #47 NEW cov: 12078 ft: 14510 corp: 33/202b lim: 10 exec/s: 47 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:06:32.272 [2024-07-15 16:23:11.688956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.688984] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.689098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.689115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.689226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.689244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.689365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.689384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.689497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.689514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.272 #48 NEW cov: 12078 ft: 14514 corp: 34/212b lim: 10 exec/s: 48 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\003\377"- 00:06:32.272 [2024-07-15 16:23:11.738154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000530d cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.738181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.272 #53 NEW cov: 12078 ft: 14521 corp: 35/214b lim: 10 exec/s: 53 rss: 73Mb L: 2/10 MS: 5 EraseBytes-ShuffleBytes-ShuffleBytes-ChangeBinInt-InsertByte- 00:06:32.272 [2024-07-15 16:23:11.789226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.789252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.789364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.789383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.789503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.789520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.789638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.789657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.789777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 
nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.789794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.839325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.839352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.839457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000e00 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.839486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.839605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000300 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.839623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.839737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.839756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.272 [2024-07-15 16:23:11.839869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:32.272 [2024-07-15 16:23:11.839887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.272 #55 NEW cov: 12078 ft: 14530 corp: 36/224b lim: 10 exec/s: 55 rss: 73Mb L: 10/10 MS: 2 CrossOver-ShuffleBytes- 00:06:32.531 [2024-07-15 16:23:11.879292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:32.531 [2024-07-15 16:23:11.879318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.531 [2024-07-15 16:23:11.879432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.531 [2024-07-15 16:23:11.879454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.531 [2024-07-15 16:23:11.879562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.531 [2024-07-15 16:23:11.879579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.531 [2024-07-15 16:23:11.879693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 00:06:32.531 [2024-07-15 16:23:11.879708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.531 #57 NEW cov: 12078 ft: 14571 corp: 37/233b lim: 10 exec/s: 28 rss: 73Mb L: 9/10 MS: 2 CopyPart-PersAutoDict- DE: "\000\000\000\000\000\000\003\377"- 00:06:32.531 #57 DONE cov: 12078 ft: 14571 corp: 37/233b lim: 10 
exec/s: 28 rss: 73Mb 00:06:32.531 ###### Recommended dictionary. ###### 00:06:32.531 "\000\000\000\000\000\000\000\017" # Uses: 1 00:06:32.531 "\000\000\000\000\000\000\003\377" # Uses: 2 00:06:32.531 ###### End of recommended dictionary. ###### 00:06:32.531 Done 57 runs in 2 second(s) 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:32.531 16:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:32.531 [2024-07-15 16:23:12.065742] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
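
For readability, the traced nvmf/run.sh steps above (per-fuzzer port, corpus directory, trsvcid substitution in fuzz_json.conf, LSAN leak suppressions, and the llvm_nvme_fuzz invocation) can be summarized as a small shell sketch. This is a reconstruction inferred from the trace, not the actual SPDK script: the function body, the output redirections, and the port arithmetic (44 plus the zero-padded fuzzer index, giving 4406 and 4407 in the runs above) are assumptions; paths are copied from the log.

#!/usr/bin/env bash
# Sketch of the per-fuzzer setup implied by the run.sh trace above (assumptions noted inline).
set -e

SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # taken from the trace

start_llvm_fuzz() {
    local fuzzer_type=$1   # e.g. 6, 7 ...
    local timen=$2         # run time in seconds, passed to -t
    local core=$3          # core mask, passed to -m
    local corpus_dir=$SPDK_DIR/../corpus/llvm_nvmf_$fuzzer_type
    local nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    local suppress_file=/var/tmp/suppress_nvmf_fuzz

    # Each fuzzer instance listens on its own NVMe/TCP port: "44" + zero-padded index
    # (assumed from "printf %02d" and ports 4406/4407 in the log).
    local port="44$(printf %02d "$fuzzer_type")"
    mkdir -p "$corpus_dir"

    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    # Rewrite the template JSON config so the target listens on this instance's port
    # (redirection to $nvmf_cfg is assumed; the trace only shows the sed expression).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # Known, intentional allocations are suppressed for LeakSanitizer
    # (redirection to the suppression file is assumed).
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
        "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$SPDK_DIR/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
        -D "$corpus_dir" -Z "$fuzzer_type"

    # Per-run artifacts are removed afterwards, as in the trace.
    rm -rf "$nvmf_cfg" "$suppress_file"
}

# Usage matching the trace above: fuzzer 7, 1 second, core mask 0x1.
start_llvm_fuzz 7 1 0x1
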
00:06:32.531 [2024-07-15 16:23:12.065811] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2025289 ] 00:06:32.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.790 [2024-07-15 16:23:12.246245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.790 [2024-07-15 16:23:12.311498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.790 [2024-07-15 16:23:12.370500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.049 [2024-07-15 16:23:12.386784] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:33.049 INFO: Running with entropic power schedule (0xFF, 100). 00:06:33.049 INFO: Seed: 499342529 00:06:33.049 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:33.049 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:33.049 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:33.049 INFO: A corpus is not provided, starting from an empty corpus 00:06:33.049 #2 INITED exec/s: 0 rss: 65Mb 00:06:33.049 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:33.049 This may also happen if the target rejected all inputs we tried so far 00:06:33.049 [2024-07-15 16:23:12.442194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.049 [2024-07-15 16:23:12.442223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.049 [2024-07-15 16:23:12.442276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.049 [2024-07-15 16:23:12.442290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.049 [2024-07-15 16:23:12.442337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.049 [2024-07-15 16:23:12.442350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.309 NEW_FUNC[1/693]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:33.309 NEW_FUNC[2/693]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:33.309 #3 NEW cov: 11827 ft: 11834 corp: 2/8b lim: 10 exec/s: 0 rss: 71Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:06:33.309 [2024-07-15 16:23:12.762805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:33.309 [2024-07-15 16:23:12.762838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.309 NEW_FUNC[1/1]: 0x133fc60 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:727 00:06:33.309 #4 NEW cov: 11964 ft: 12773 corp: 3/10b lim: 10 exec/s: 0 rss: 71Mb L: 2/7 
MS: 1 CopyPart- 00:06:33.309 [2024-07-15 16:23:12.802844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:33.309 [2024-07-15 16:23:12.802870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.309 #5 NEW cov: 11970 ft: 13018 corp: 4/13b lim: 10 exec/s: 0 rss: 71Mb L: 3/7 MS: 1 InsertByte- 00:06:33.309 [2024-07-15 16:23:12.853212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.309 [2024-07-15 16:23:12.853238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.309 [2024-07-15 16:23:12.853292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.309 [2024-07-15 16:23:12.853305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.309 [2024-07-15 16:23:12.853356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:33.309 [2024-07-15 16:23:12.853369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.309 #6 NEW cov: 12055 ft: 13253 corp: 5/19b lim: 10 exec/s: 0 rss: 71Mb L: 6/7 MS: 1 InsertRepeatedBytes- 00:06:33.309 [2024-07-15 16:23:12.893310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.309 [2024-07-15 16:23:12.893336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.309 [2024-07-15 16:23:12.893404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.309 [2024-07-15 16:23:12.893418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.309 [2024-07-15 16:23:12.893472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000bfff cdw11:00000000 00:06:33.309 [2024-07-15 16:23:12.893486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.568 #7 NEW cov: 12055 ft: 13314 corp: 6/26b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 ChangeBit- 00:06:33.568 [2024-07-15 16:23:12.943614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.568 [2024-07-15 16:23:12.943639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:12.943692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.568 [2024-07-15 16:23:12.943706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:12.943754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000bfff cdw11:00000000 00:06:33.568 [2024-07-15 16:23:12.943767] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:12.943816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:33.568 [2024-07-15 16:23:12.943829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.568 #8 NEW cov: 12055 ft: 13552 corp: 7/34b lim: 10 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 CrossOver- 00:06:33.568 [2024-07-15 16:23:12.993633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:12.993658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:12.993726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:12.993740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:12.993794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000074 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:12.993806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.568 #9 NEW cov: 12055 ft: 13604 corp: 8/40b lim: 10 exec/s: 0 rss: 72Mb L: 6/8 MS: 1 ChangeByte- 00:06:33.568 [2024-07-15 16:23:13.043968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.043993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:13.044044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.044057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:13.044107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.044135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:13.044187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.044201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:13.044251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000074 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.044265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.568 #10 NEW cov: 12055 ft: 13656 corp: 9/50b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:33.568 [2024-07-15 16:23:13.093869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.093893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:13.093959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000f8 cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.093972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:13.094023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.094036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.568 #11 NEW cov: 12055 ft: 13741 corp: 10/56b lim: 10 exec/s: 0 rss: 72Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:33.568 [2024-07-15 16:23:13.133980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.134005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:13.134074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.134087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.568 [2024-07-15 16:23:13.134138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.568 [2024-07-15 16:23:13.134151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.568 #12 NEW cov: 12055 ft: 13825 corp: 11/63b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 ShuffleBytes- 00:06:33.840 [2024-07-15 16:23:13.173885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000400a cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.173910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.840 #17 NEW cov: 12055 ft: 13877 corp: 12/65b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 5 CopyPart-CrossOver-ChangeByte-ShuffleBytes-CrossOver- 00:06:33.840 [2024-07-15 16:23:13.214199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000600 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.214223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.214276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.214289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.214340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.214352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.840 #18 NEW cov: 12055 ft: 13898 corp: 13/71b lim: 10 exec/s: 0 rss: 72Mb L: 6/10 MS: 1 CMP- DE: "\006\000"- 00:06:33.840 [2024-07-15 16:23:13.254301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.254326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.254397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.254410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.254459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.254472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.840 #19 NEW cov: 12055 ft: 13914 corp: 14/78b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 CopyPart- 00:06:33.840 [2024-07-15 16:23:13.294400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.294425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.294493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.294508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.294557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.294571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.840 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:33.840 #20 NEW cov: 12078 ft: 13957 corp: 15/85b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 PersAutoDict- DE: "\006\000"- 00:06:33.840 [2024-07-15 16:23:13.344816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.344841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.344892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.344908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.344960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.344973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 
16:23:13.345023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000026 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.345035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.345085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000074 cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.345098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.840 #21 NEW cov: 12078 ft: 13977 corp: 16/95b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:06:33.840 [2024-07-15 16:23:13.394707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.394731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.394796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000023ff cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.394809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.840 [2024-07-15 16:23:13.394856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:33.840 [2024-07-15 16:23:13.394869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.840 #22 NEW cov: 12078 ft: 13985 corp: 17/102b lim: 10 exec/s: 22 rss: 72Mb L: 7/10 MS: 1 ChangeByte- 00:06:34.103 [2024-07-15 16:23:13.445055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.445080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.445133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.445146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.445197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.445210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.445258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.445271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.445320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000074 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.445333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.103 #23 NEW cov: 
12078 ft: 14027 corp: 18/112b lim: 10 exec/s: 23 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:34.103 [2024-07-15 16:23:13.484986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.485015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.485066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000bfff cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.485079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.485128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.485141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.103 #24 NEW cov: 12078 ft: 14047 corp: 19/119b lim: 10 exec/s: 24 rss: 72Mb L: 7/10 MS: 1 ChangeBit- 00:06:34.103 [2024-07-15 16:23:13.525088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000600 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.525113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.525164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.525177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.525227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.525239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.103 #25 NEW cov: 12078 ft: 14112 corp: 20/125b lim: 10 exec/s: 25 rss: 72Mb L: 6/10 MS: 1 ChangeBit- 00:06:34.103 [2024-07-15 16:23:13.575316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.575341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.575391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.575404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.575456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.575484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.575536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002600 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.575549] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.103 #26 NEW cov: 12078 ft: 14149 corp: 21/134b lim: 10 exec/s: 26 rss: 72Mb L: 9/10 MS: 1 EraseBytes- 00:06:34.103 [2024-07-15 16:23:13.625281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aa9 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.625305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.625356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.625370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.103 #27 NEW cov: 12078 ft: 14287 corp: 22/139b lim: 10 exec/s: 27 rss: 72Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:06:34.103 [2024-07-15 16:23:13.665487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000400a cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.665515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.665568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000aa9 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.665581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.103 [2024-07-15 16:23:13.665631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:34.103 [2024-07-15 16:23:13.665645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.362 #28 NEW cov: 12078 ft: 14304 corp: 23/146b lim: 10 exec/s: 28 rss: 73Mb L: 7/10 MS: 1 CrossOver- 00:06:34.362 [2024-07-15 16:23:13.715829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.715854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.715906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.715919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.715969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.715982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.716032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000026 cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.716044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.716094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000006e cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.716107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.362 #29 NEW cov: 12078 ft: 14346 corp: 24/156b lim: 10 exec/s: 29 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:34.362 [2024-07-15 16:23:13.755979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.756003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.756072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.756085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.756137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.756150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.756203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000026 cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.756216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.756268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000006e cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.756281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.362 #30 NEW cov: 12078 ft: 14369 corp: 25/166b lim: 10 exec/s: 30 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:34.362 [2024-07-15 16:23:13.805862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffbf cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.805886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.805936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.805949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.806001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.806014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.362 #31 NEW cov: 12078 ft: 14377 corp: 26/172b lim: 10 exec/s: 31 rss: 73Mb L: 6/10 MS: 1 EraseBytes- 00:06:34.362 [2024-07-15 16:23:13.855974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fdbf cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.855998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:34.362 [2024-07-15 16:23:13.856048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.856061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.856112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.856140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.362 #32 NEW cov: 12078 ft: 14392 corp: 27/178b lim: 10 exec/s: 32 rss: 73Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:34.362 [2024-07-15 16:23:13.906094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000003f cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.906118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.362 [2024-07-15 16:23:13.906167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.362 [2024-07-15 16:23:13.906179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.363 [2024-07-15 16:23:13.906228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f8ff cdw11:00000000 00:06:34.363 [2024-07-15 16:23:13.906241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.363 #33 NEW cov: 12078 ft: 14423 corp: 28/185b lim: 10 exec/s: 33 rss: 73Mb L: 7/10 MS: 1 InsertByte- 00:06:34.622 [2024-07-15 16:23:13.956334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:13.956360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:13.956409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:13.956423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:13.956475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000bfff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:13.956488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.622 #34 NEW cov: 12078 ft: 14512 corp: 29/192b lim: 10 exec/s: 34 rss: 73Mb L: 7/10 MS: 1 ChangeByte- 00:06:34.622 [2024-07-15 16:23:13.996390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.622 [2024-07-15 16:23:13.996414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:13.996466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.622 [2024-07-15 16:23:13.996480] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:13.996529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000070 cdw11:00000000 00:06:34.622 [2024-07-15 16:23:13.996542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.622 #35 NEW cov: 12078 ft: 14527 corp: 30/198b lim: 10 exec/s: 35 rss: 73Mb L: 6/10 MS: 1 ChangeBit- 00:06:34.622 [2024-07-15 16:23:14.036630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000bfff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.036656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:14.036706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.036719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:14.036771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffbf cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.036784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:14.036836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff3f cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.036849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.622 #36 NEW cov: 12078 ft: 14543 corp: 31/206b lim: 10 exec/s: 36 rss: 73Mb L: 8/10 MS: 1 CopyPart- 00:06:34.622 [2024-07-15 16:23:14.086642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.086667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:14.086716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000bfff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.086729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:14.086779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.086791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.622 #37 NEW cov: 12078 ft: 14548 corp: 32/213b lim: 10 exec/s: 37 rss: 73Mb L: 7/10 MS: 1 ShuffleBytes- 00:06:34.622 [2024-07-15 16:23:14.126992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.127016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:14.127068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.127080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:14.127130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000600 cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.127159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:14.127209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000bfff cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.127222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.622 [2024-07-15 16:23:14.127271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.127283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.622 #38 NEW cov: 12078 ft: 14551 corp: 33/223b lim: 10 exec/s: 38 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "\006\000"- 00:06:34.622 [2024-07-15 16:23:14.176663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d10a cdw11:00000000 00:06:34.622 [2024-07-15 16:23:14.176688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.622 #39 NEW cov: 12078 ft: 14579 corp: 34/225b lim: 10 exec/s: 39 rss: 73Mb L: 2/10 MS: 1 InsertByte- 00:06:34.881 [2024-07-15 16:23:14.217044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fffb cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.217068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.217135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000023ff cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.217148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.217209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.217222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.882 #40 NEW cov: 12078 ft: 14586 corp: 35/232b lim: 10 exec/s: 40 rss: 73Mb L: 7/10 MS: 1 ChangeBit- 00:06:34.882 [2024-07-15 16:23:14.267387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.267411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.267465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000e9 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.267478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.267529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e9e9 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.267543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.267594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e900 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.267607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.267658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000074 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.267670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.882 #41 NEW cov: 12078 ft: 14588 corp: 36/242b lim: 10 exec/s: 41 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:34.882 [2024-07-15 16:23:14.307532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.307558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.307627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000bf cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.307640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.307692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.307705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.307756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000026 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.307769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.307821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000074 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.307834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.882 #42 NEW cov: 12078 ft: 14598 corp: 37/252b lim: 10 exec/s: 42 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:06:34.882 [2024-07-15 16:23:14.357560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000600 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.357586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.357638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.357652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.357703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.357716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.357767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.357780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.882 #43 NEW cov: 12078 ft: 14615 corp: 38/260b lim: 10 exec/s: 43 rss: 73Mb L: 8/10 MS: 1 PersAutoDict- DE: "\006\000"- 00:06:34.882 [2024-07-15 16:23:14.397557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000003f cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.397589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.397643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.397657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.882 [2024-07-15 16:23:14.397705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f8ff cdw11:00000000 00:06:34.882 [2024-07-15 16:23:14.397718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.882 #44 NEW cov: 12078 ft: 14637 corp: 39/267b lim: 10 exec/s: 22 rss: 73Mb L: 7/10 MS: 1 ShuffleBytes- 00:06:34.882 #44 DONE cov: 12078 ft: 14637 corp: 39/267b lim: 10 exec/s: 22 rss: 73Mb 00:06:34.882 ###### Recommended dictionary. ###### 00:06:34.882 "\006\000" # Uses: 3 00:06:34.882 ###### End of recommended dictionary. 
###### 00:06:34.882 Done 44 runs in 2 second(s) 00:06:35.141 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:35.141 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:35.141 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:35.141 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:35.141 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:35.141 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:35.141 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:35.142 16:23:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:35.142 [2024-07-15 16:23:14.601677] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
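The "###### Recommended dictionary ######" blocks that close each run above are libFuzzer reporting input fragments that repeatedly produced new coverage, printed in its dictionary-file syntax. As a hedged illustration only: with a target that accepts standard libFuzzer flags, those entries could be saved to a file and fed back on later runs via -dict=; whether this llvm_nvme_fuzz wrapper forwards -dict= to libFuzzer is not shown in this log, so the flag pass-through is an assumption.

    # Hypothetical reuse of the entry recommended by run 7 above; assumes the
    # binary passes -dict= through to libFuzzer, which this log does not confirm.
    cat > /tmp/llvm_nvmf.dict <<'EOF'
    "\006\000"
    EOF
    # ...same invocation as above, plus: -dict=/tmp/llvm_nvmf.dict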
00:06:35.142 [2024-07-15 16:23:14.601750] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2025702 ] 00:06:35.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.400 [2024-07-15 16:23:14.783071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.400 [2024-07-15 16:23:14.849534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.400 [2024-07-15 16:23:14.908391] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.400 [2024-07-15 16:23:14.924693] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:35.400 INFO: Running with entropic power schedule (0xFF, 100). 00:06:35.400 INFO: Seed: 3037342979 00:06:35.400 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:35.400 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:35.400 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:35.400 INFO: A corpus is not provided, starting from an empty corpus 00:06:35.400 [2024-07-15 16:23:14.980036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.400 [2024-07-15 16:23:14.980065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.659 #2 INITED cov: 11862 ft: 11863 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:35.659 [2024-07-15 16:23:15.020163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.659 [2024-07-15 16:23:15.020188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.659 [2024-07-15 16:23:15.020247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.659 [2024-07-15 16:23:15.020261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.659 #3 NEW cov: 11992 ft: 13022 corp: 2/3b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:06:35.659 [2024-07-15 16:23:15.070154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.659 [2024-07-15 16:23:15.070179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.659 #4 NEW cov: 11998 ft: 13345 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:06:35.659 [2024-07-15 16:23:15.110270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.659 [2024-07-15 16:23:15.110296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.660 #5 NEW cov: 12083 ft: 13578 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 
MS: 1 ChangeByte- 00:06:35.660 [2024-07-15 16:23:15.160412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.660 [2024-07-15 16:23:15.160437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.660 #6 NEW cov: 12083 ft: 13701 corp: 5/6b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 CrossOver- 00:06:35.660 [2024-07-15 16:23:15.200543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.660 [2024-07-15 16:23:15.200568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.660 #7 NEW cov: 12083 ft: 13833 corp: 6/7b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:06:35.660 [2024-07-15 16:23:15.240689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.660 [2024-07-15 16:23:15.240713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.919 #8 NEW cov: 12083 ft: 13982 corp: 7/8b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeByte- 00:06:35.919 [2024-07-15 16:23:15.290873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.919 [2024-07-15 16:23:15.290897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.919 #9 NEW cov: 12083 ft: 14025 corp: 8/9b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 CrossOver- 00:06:35.919 [2024-07-15 16:23:15.330900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.919 [2024-07-15 16:23:15.330924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.919 #10 NEW cov: 12083 ft: 14077 corp: 9/10b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:06:35.919 [2024-07-15 16:23:15.381237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.919 [2024-07-15 16:23:15.381264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.919 [2024-07-15 16:23:15.381323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.919 [2024-07-15 16:23:15.381336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.919 #11 NEW cov: 12083 ft: 14131 corp: 10/12b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:06:35.919 [2024-07-15 16:23:15.421338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.919 [2024-07-15 16:23:15.421363] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.919 [2024-07-15 16:23:15.421421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.919 [2024-07-15 16:23:15.421435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.919 #12 NEW cov: 12083 ft: 14304 corp: 11/14b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:35.919 [2024-07-15 16:23:15.461287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.919 [2024-07-15 16:23:15.461312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.919 #13 NEW cov: 12083 ft: 14325 corp: 12/15b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeByte- 00:06:35.919 [2024-07-15 16:23:15.511479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.919 [2024-07-15 16:23:15.511504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.179 #14 NEW cov: 12083 ft: 14394 corp: 13/16b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeByte- 00:06:36.179 [2024-07-15 16:23:15.551543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.551569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.179 #15 NEW cov: 12083 ft: 14410 corp: 14/17b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ChangeBinInt- 00:06:36.179 [2024-07-15 16:23:15.602388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.602412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.179 [2024-07-15 16:23:15.602502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.602516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.179 [2024-07-15 16:23:15.602577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.602591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.179 [2024-07-15 16:23:15.602646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.602662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:36.179 [2024-07-15 16:23:15.602737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.602750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.179 #16 NEW cov: 12083 ft: 14766 corp: 15/22b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:36.179 [2024-07-15 16:23:15.642109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.642134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.179 [2024-07-15 16:23:15.642192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.642206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.179 [2024-07-15 16:23:15.642261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.642274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.179 #17 NEW cov: 12083 ft: 14948 corp: 16/25b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 CrossOver- 00:06:36.179 [2024-07-15 16:23:15.692178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.692202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.179 [2024-07-15 16:23:15.692262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.692276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.179 #18 NEW cov: 12083 ft: 14998 corp: 17/27b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:06:36.179 [2024-07-15 16:23:15.732250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.732275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.179 [2024-07-15 16:23:15.732334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.179 [2024-07-15 16:23:15.732347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.179 #19 NEW cov: 12083 ft: 15016 corp: 18/29b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:06:36.438 [2024-07-15 16:23:15.782413] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.438 [2024-07-15 16:23:15.782439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.438 [2024-07-15 16:23:15.782504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.438 [2024-07-15 16:23:15.782517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.438 #20 NEW cov: 12083 ft: 15041 corp: 19/31b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:06:36.438 [2024-07-15 16:23:15.822531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.438 [2024-07-15 16:23:15.822557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.438 [2024-07-15 16:23:15.822617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.438 [2024-07-15 16:23:15.822631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.698 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:36.698 #21 NEW cov: 12106 ft: 15072 corp: 20/33b lim: 5 exec/s: 21 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:36.698 [2024-07-15 16:23:16.133312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.133352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.698 [2024-07-15 16:23:16.133425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.133451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.698 #22 NEW cov: 12106 ft: 15120 corp: 21/35b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:36.698 [2024-07-15 16:23:16.183570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.183597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.698 [2024-07-15 16:23:16.183649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.183663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.698 [2024-07-15 16:23:16.183714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.183727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.698 [2024-07-15 16:23:16.183780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.183793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.698 #23 NEW cov: 12106 ft: 15143 corp: 22/39b lim: 5 exec/s: 23 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:36.698 [2024-07-15 16:23:16.223378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.223404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.698 [2024-07-15 16:23:16.223463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.223477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.698 #24 NEW cov: 12106 ft: 15152 corp: 23/41b lim: 5 exec/s: 24 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:36.698 [2024-07-15 16:23:16.273530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.273555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.698 [2024-07-15 16:23:16.273609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.698 [2024-07-15 16:23:16.273623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.957 #25 NEW cov: 12106 ft: 15163 corp: 24/43b lim: 5 exec/s: 25 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:36.957 [2024-07-15 16:23:16.313532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.957 [2024-07-15 16:23:16.313556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.957 #26 NEW cov: 12106 ft: 15198 corp: 25/44b lim: 5 exec/s: 26 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:06:36.957 [2024-07-15 16:23:16.363655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.957 [2024-07-15 16:23:16.363680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.957 #27 NEW cov: 12106 ft: 15288 corp: 26/45b lim: 5 exec/s: 27 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:36.957 [2024-07-15 16:23:16.404182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.957 [2024-07-15 16:23:16.404207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.957 [2024-07-15 16:23:16.404277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.957 [2024-07-15 16:23:16.404290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.957 [2024-07-15 16:23:16.404345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.957 [2024-07-15 16:23:16.404359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.957 [2024-07-15 16:23:16.404410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.957 [2024-07-15 16:23:16.404423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.957 #28 NEW cov: 12106 ft: 15296 corp: 27/49b lim: 5 exec/s: 28 rss: 73Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:36.957 [2024-07-15 16:23:16.453888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.958 [2024-07-15 16:23:16.453912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.958 #29 NEW cov: 12106 ft: 15363 corp: 28/50b lim: 5 exec/s: 29 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:06:36.958 [2024-07-15 16:23:16.493976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.958 [2024-07-15 16:23:16.494000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.958 #30 NEW cov: 12106 ft: 15405 corp: 29/51b lim: 5 exec/s: 30 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:36.958 [2024-07-15 16:23:16.534084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.958 [2024-07-15 16:23:16.534111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.216 #31 NEW cov: 12106 ft: 15433 corp: 30/52b lim: 5 exec/s: 31 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:37.216 [2024-07-15 16:23:16.574388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.574412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.216 [2024-07-15 16:23:16.574471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.574485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.216 #32 NEW cov: 12106 ft: 15479 corp: 31/54b lim: 5 exec/s: 32 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:37.216 [2024-07-15 16:23:16.624535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.624560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.216 [2024-07-15 16:23:16.624615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.624628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.216 #33 NEW cov: 12106 ft: 15490 corp: 32/56b lim: 5 exec/s: 33 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:37.216 [2024-07-15 16:23:16.664657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.664681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.216 [2024-07-15 16:23:16.664735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.664748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.216 #34 NEW cov: 12106 ft: 15519 corp: 33/58b lim: 5 exec/s: 34 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:37.216 [2024-07-15 16:23:16.714625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.714649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.216 #35 NEW cov: 12106 ft: 15541 corp: 34/59b lim: 5 exec/s: 35 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:37.216 [2024-07-15 16:23:16.765050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.765074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.216 [2024-07-15 16:23:16.765130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.765144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.216 [2024-07-15 16:23:16.765196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.216 [2024-07-15 16:23:16.765211] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.216 #36 NEW cov: 12106 ft: 15549 corp: 35/62b lim: 5 exec/s: 36 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:06:37.475 [2024-07-15 16:23:16.815058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.475 [2024-07-15 16:23:16.815082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.475 [2024-07-15 16:23:16.815137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.475 [2024-07-15 16:23:16.815150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.475 #37 NEW cov: 12106 ft: 15555 corp: 36/64b lim: 5 exec/s: 37 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:06:37.475 [2024-07-15 16:23:16.865021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.475 [2024-07-15 16:23:16.865045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.475 #38 NEW cov: 12106 ft: 15594 corp: 37/65b lim: 5 exec/s: 38 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:06:37.475 [2024-07-15 16:23:16.915354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.475 [2024-07-15 16:23:16.915378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.475 [2024-07-15 16:23:16.915436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.475 [2024-07-15 16:23:16.915453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.475 #39 NEW cov: 12106 ft: 15601 corp: 38/67b lim: 5 exec/s: 39 rss: 74Mb L: 2/5 MS: 1 ChangeBinInt- 00:06:37.475 [2024-07-15 16:23:16.965315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.475 [2024-07-15 16:23:16.965339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.475 #40 NEW cov: 12106 ft: 15610 corp: 39/68b lim: 5 exec/s: 20 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:06:37.475 #40 DONE cov: 12106 ft: 15610 corp: 39/68b lim: 5 exec/s: 20 rss: 74Mb 00:06:37.475 Done 40 runs in 2 second(s) 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:37.734 16:23:17 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:37.734 16:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:37.734 [2024-07-15 16:23:17.150892] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:37.734 [2024-07-15 16:23:17.150961] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2026110 ] 00:06:37.734 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.993 [2024-07-15 16:23:17.331824] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.993 [2024-07-15 16:23:17.397488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.993 [2024-07-15 16:23:17.456286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.993 [2024-07-15 16:23:17.472617] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:37.993 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:37.993 INFO: Seed: 1287436804 00:06:37.993 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:37.993 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:37.993 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:37.993 INFO: A corpus is not provided, starting from an empty corpus 00:06:37.993 [2024-07-15 16:23:17.520964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.993 [2024-07-15 16:23:17.520993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.993 #2 INITED cov: 11854 ft: 11856 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:37.993 [2024-07-15 16:23:17.561585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.993 [2024-07-15 16:23:17.561614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.993 [2024-07-15 16:23:17.561671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.993 [2024-07-15 16:23:17.561687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.993 [2024-07-15 16:23:17.561746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.993 [2024-07-15 16:23:17.561759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.993 [2024-07-15 16:23:17.561815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.993 [2024-07-15 16:23:17.561832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.993 [2024-07-15 16:23:17.561888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.993 [2024-07-15 16:23:17.561902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.252 #3 NEW cov: 11992 ft: 13241 corp: 2/6b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:38.252 [2024-07-15 16:23:17.611100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.252 [2024-07-15 16:23:17.611125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.252 #4 NEW cov: 11998 ft: 13584 corp: 3/7b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeByte- 00:06:38.252 [2024-07-15 16:23:17.651396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:38.252 [2024-07-15 16:23:17.651422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.252 [2024-07-15 16:23:17.651498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.252 [2024-07-15 16:23:17.651512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.252 #5 NEW cov: 12083 ft: 13969 corp: 4/9b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 InsertByte- 00:06:38.252 [2024-07-15 16:23:17.691528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.252 [2024-07-15 16:23:17.691553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.252 [2024-07-15 16:23:17.691628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.252 [2024-07-15 16:23:17.691642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.252 #6 NEW cov: 12083 ft: 14224 corp: 5/11b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:06:38.252 [2024-07-15 16:23:17.741607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.252 [2024-07-15 16:23:17.741632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.252 [2024-07-15 16:23:17.741706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.252 [2024-07-15 16:23:17.741720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.252 #7 NEW cov: 12083 ft: 14366 corp: 6/13b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:06:38.252 [2024-07-15 16:23:17.791611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.252 [2024-07-15 16:23:17.791636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.252 #8 NEW cov: 12083 ft: 14422 corp: 7/14b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 EraseBytes- 00:06:38.252 [2024-07-15 16:23:17.841943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.252 [2024-07-15 16:23:17.841970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.252 [2024-07-15 16:23:17.842031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.252 [2024-07-15 16:23:17.842045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.509 #9 NEW cov: 12083 ft: 14430 corp: 8/16b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:06:38.509 [2024-07-15 16:23:17.882027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:17.882051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.509 [2024-07-15 16:23:17.882110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:17.882124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.509 #10 NEW cov: 12083 ft: 14477 corp: 9/18b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:06:38.509 [2024-07-15 16:23:17.922432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:17.922461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.509 [2024-07-15 16:23:17.922522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:17.922535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.509 [2024-07-15 16:23:17.922594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:17.922608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.509 [2024-07-15 16:23:17.922665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:17.922679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.509 #11 NEW cov: 12083 ft: 14516 corp: 10/22b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 CMP- DE: "\001\000"- 00:06:38.509 [2024-07-15 16:23:17.972112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:17.972136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.509 #12 NEW cov: 12083 ft: 14639 corp: 11/23b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:06:38.509 [2024-07-15 16:23:18.022388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:18.022413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.509 [2024-07-15 16:23:18.022477] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:18.022491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.509 #13 NEW cov: 12083 ft: 14679 corp: 12/25b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ChangeBinInt- 00:06:38.509 [2024-07-15 16:23:18.062318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:18.062343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.509 #14 NEW cov: 12083 ft: 14707 corp: 13/26b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:38.509 [2024-07-15 16:23:18.103169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.509 [2024-07-15 16:23:18.103194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.103254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.103268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.103325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.103340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.103399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.103412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.768 #15 NEW cov: 12083 ft: 14739 corp: 14/30b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:06:38.768 [2024-07-15 16:23:18.153025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.153049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.153125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.153140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.153197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 
16:23:18.153210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.768 #16 NEW cov: 12083 ft: 14918 corp: 15/33b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:38.768 [2024-07-15 16:23:18.202772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.202796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.768 #17 NEW cov: 12083 ft: 14950 corp: 16/34b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:38.768 [2024-07-15 16:23:18.242903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.242927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.768 #18 NEW cov: 12083 ft: 14978 corp: 17/35b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:38.768 [2024-07-15 16:23:18.293202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.293226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.293284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.293298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.768 #19 NEW cov: 12083 ft: 14984 corp: 18/37b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:06:38.768 [2024-07-15 16:23:18.333793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.333817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.333891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.333904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.333963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.333977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.334034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.334047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.768 [2024-07-15 16:23:18.334100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.768 [2024-07-15 16:23:18.334114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.768 #20 NEW cov: 12083 ft: 15027 corp: 19/42b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeByte- 00:06:39.027 [2024-07-15 16:23:18.373250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.027 [2024-07-15 16:23:18.373274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.286 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:39.286 #21 NEW cov: 12106 ft: 15031 corp: 20/43b lim: 5 exec/s: 21 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:39.286 [2024-07-15 16:23:18.684637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.684670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.286 [2024-07-15 16:23:18.684746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.684760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.286 [2024-07-15 16:23:18.684816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.684832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.286 [2024-07-15 16:23:18.684889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.684902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.286 #22 NEW cov: 12106 ft: 15093 corp: 21/47b lim: 5 exec/s: 22 rss: 72Mb L: 4/5 MS: 1 ChangeBinInt- 00:06:39.286 [2024-07-15 16:23:18.734153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.734178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.286 #23 NEW cov: 12106 ft: 15140 corp: 22/48b lim: 5 exec/s: 23 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:39.286 [2024-07-15 16:23:18.774615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.774642] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.286 [2024-07-15 16:23:18.774717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.774732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.286 [2024-07-15 16:23:18.774789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.774803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.286 #24 NEW cov: 12106 ft: 15152 corp: 23/51b lim: 5 exec/s: 24 rss: 72Mb L: 3/5 MS: 1 CrossOver- 00:06:39.286 [2024-07-15 16:23:18.824453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.824478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.286 #25 NEW cov: 12106 ft: 15167 corp: 24/52b lim: 5 exec/s: 25 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:06:39.286 [2024-07-15 16:23:18.864692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.864717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.286 [2024-07-15 16:23:18.864776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.286 [2024-07-15 16:23:18.864790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.545 #26 NEW cov: 12106 ft: 15203 corp: 25/54b lim: 5 exec/s: 26 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:39.546 [2024-07-15 16:23:18.915039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:18.915064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:18.915119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:18.915135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:18.915190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:18.915203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.546 #27 NEW cov: 12106 ft: 15214 corp: 26/57b lim: 5 exec/s: 27 rss: 72Mb L: 3/5 MS: 1 
PersAutoDict- DE: "\001\000"- 00:06:39.546 [2024-07-15 16:23:18.964997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:18.965023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:18.965081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:18.965095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.546 #28 NEW cov: 12106 ft: 15253 corp: 27/59b lim: 5 exec/s: 28 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:39.546 [2024-07-15 16:23:19.015627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.015654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.015712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.015725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.015779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.015792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.015849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.015862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.015917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.015930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.546 #29 NEW cov: 12106 ft: 15283 corp: 28/64b lim: 5 exec/s: 29 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:39.546 [2024-07-15 16:23:19.055376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.055402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.055461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.055475] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.055527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.055544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.105898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.105923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.105981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.105995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.106049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.106062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.106115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.106129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.546 [2024-07-15 16:23:19.106182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.546 [2024-07-15 16:23:19.106195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.546 #31 NEW cov: 12106 ft: 15290 corp: 29/69b lim: 5 exec/s: 31 rss: 72Mb L: 5/5 MS: 2 ChangeBit-CopyPart- 00:06:39.806 [2024-07-15 16:23:19.145981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.146006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.146060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.146074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.146129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.146142] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.146197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.146210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.146264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.146278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.806 #32 NEW cov: 12106 ft: 15318 corp: 30/74b lim: 5 exec/s: 32 rss: 73Mb L: 5/5 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:39.806 [2024-07-15 16:23:19.195790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.195818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.195873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.195886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.195941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.195954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.806 #33 NEW cov: 12106 ft: 15324 corp: 31/77b lim: 5 exec/s: 33 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:06:39.806 [2024-07-15 16:23:19.235944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.235968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.236023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.236037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.236105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.236120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.806 #34 NEW cov: 12106 ft: 15329 corp: 32/80b lim: 5 exec/s: 34 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:06:39.806 [2024-07-15 16:23:19.275873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.275899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.275956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.275969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.806 #35 NEW cov: 12106 ft: 15349 corp: 33/82b lim: 5 exec/s: 35 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:06:39.806 [2024-07-15 16:23:19.316146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.316171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.316227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.316241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.316294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.316307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.806 #36 NEW cov: 12106 ft: 15355 corp: 34/85b lim: 5 exec/s: 36 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:39.806 [2024-07-15 16:23:19.366440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.366469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.366526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.366539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.366596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.366609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.806 [2024-07-15 16:23:19.366665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.806 [2024-07-15 16:23:19.366678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.806 #37 NEW cov: 12106 ft: 15357 corp: 
35/89b lim: 5 exec/s: 37 rss: 73Mb L: 4/5 MS: 1 ChangeBit- 00:06:40.065 [2024-07-15 16:23:19.406754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.406779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.406852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.406865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.406922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.406936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.406992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.407005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.407060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.407074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.065 #38 NEW cov: 12106 ft: 15391 corp: 36/94b lim: 5 exec/s: 38 rss: 73Mb L: 5/5 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:40.065 [2024-07-15 16:23:19.456886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.456911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.456969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.456983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.457039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.457052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.457109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.457121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.457177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.457190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.065 #39 NEW cov: 12106 ft: 15417 corp: 37/99b lim: 5 exec/s: 39 rss: 73Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:40.065 [2024-07-15 16:23:19.506694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.506720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.506775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.506789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.065 [2024-07-15 16:23:19.506860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.065 [2024-07-15 16:23:19.506874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.065 #40 NEW cov: 12106 ft: 15431 corp: 38/102b lim: 5 exec/s: 20 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:06:40.065 #40 DONE cov: 12106 ft: 15431 corp: 38/102b lim: 5 exec/s: 20 rss: 73Mb 00:06:40.065 ###### Recommended dictionary. ###### 00:06:40.065 "\001\000" # Uses: 4 00:06:40.065 ###### End of recommended dictionary. 
###### 00:06:40.065 Done 40 runs in 2 second(s) 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:40.324 16:23:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:40.324 [2024-07-15 16:23:19.708331] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
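(For reference, the xtrace above is the harness — common.sh calling nvmf/run.sh start_llvm_fuzz — launching fuzzer run #10 against an NVMe/TCP target on loopback port 4410. The following is a minimal, hypothetical standalone sketch of that same invocation, inferred from the flags visible in the trace; output redirections are not shown by xtrace, so the config/suppression file writes are assumptions, and SPDK_DIR is a placeholder for the Jenkins workspace path used in the CI job. It assumes an SPDK tree already built with the LLVM fuzz targets.)

  # Hypothetical local reproduction of fuzzer run #10 (paths are placeholders).
  SPDK_DIR=/path/to/spdk
  CONF=/tmp/fuzz_json_10.conf
  SUPP=/var/tmp/suppress_nvmf_fuzz
  CORPUS=$SPDK_DIR/../corpus/llvm_nvmf_10

  # Retarget the stock nvmf fuzz config from port 4420 to this run's port 4410
  # (the redirection into $CONF is assumed; xtrace only shows the sed command).
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' \
      "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$CONF"

  # LeakSanitizer suppressions used by the harness (destination file assumed).
  echo leak:spdk_nvmf_qpair_disconnect > "$SUPP"
  echo leak:nvmf_ctrlr_create >> "$SUPP"
  mkdir -p "$CORPUS"

  LSAN_OPTIONS=report_objects=1:suppressions=$SUPP:print_suppressions=0 \
    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$SPDK_DIR/../output/llvm/" \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' \
      -c "$CONF" -t 1 -D "$CORPUS" -Z 10

(The log that follows is the output of this run: SPDK/DPDK initialization, the TCP listener coming up on 127.0.0.1:4410, and libFuzzer's coverage lines as it mutates admin commands.)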
00:06:40.324 [2024-07-15 16:23:19.708400] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2026647 ] 00:06:40.324 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.324 [2024-07-15 16:23:19.885380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.583 [2024-07-15 16:23:19.951103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.583 [2024-07-15 16:23:20.009854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.583 [2024-07-15 16:23:20.026163] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:40.583 INFO: Running with entropic power schedule (0xFF, 100). 00:06:40.583 INFO: Seed: 3841773439 00:06:40.583 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:40.583 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:40.583 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:40.583 INFO: A corpus is not provided, starting from an empty corpus 00:06:40.583 #2 INITED exec/s: 0 rss: 64Mb 00:06:40.583 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:40.583 This may also happen if the target rejected all inputs we tried so far 00:06:40.583 [2024-07-15 16:23:20.075219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.583 [2024-07-15 16:23:20.075250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.583 [2024-07-15 16:23:20.075326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.583 [2024-07-15 16:23:20.075340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.583 [2024-07-15 16:23:20.075397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.583 [2024-07-15 16:23:20.075412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.583 [2024-07-15 16:23:20.075475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12123101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.583 [2024-07-15 16:23:20.075488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.842 NEW_FUNC[1/695]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:40.842 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:40.842 #5 NEW cov: 11885 ft: 11885 corp: 2/33b lim: 40 exec/s: 0 rss: 70Mb L: 32/32 MS: 3 ChangeBinInt-InsertByte-InsertRepeatedBytes- 00:06:40.842 [2024-07-15 16:23:20.405925] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.842 [2024-07-15 16:23:20.405962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.842 [2024-07-15 16:23:20.406022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.842 [2024-07-15 16:23:20.406035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.842 [2024-07-15 16:23:20.406092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.842 [2024-07-15 16:23:20.406106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.842 #9 NEW cov: 12015 ft: 12951 corp: 3/64b lim: 40 exec/s: 0 rss: 70Mb L: 31/32 MS: 4 InsertByte-ChangeBinInt-ShuffleBytes-CrossOver- 00:06:41.100 [2024-07-15 16:23:20.446076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12129212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.446102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.446161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.446175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.446230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.446243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.446300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12123101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.446313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.100 #10 NEW cov: 12021 ft: 13190 corp: 4/96b lim: 40 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeBit- 00:06:41.100 [2024-07-15 16:23:20.496073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.496099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.496162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.496175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.496231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12123101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.496244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.100 #11 NEW cov: 12106 ft: 13428 corp: 5/120b lim: 40 exec/s: 0 rss: 70Mb L: 24/32 MS: 1 EraseBytes- 00:06:41.100 [2024-07-15 16:23:20.536211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:bf121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.536237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.536298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.536311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.536371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121231 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.536384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.100 #22 NEW cov: 12106 ft: 13507 corp: 6/145b lim: 40 exec/s: 0 rss: 70Mb L: 25/32 MS: 1 InsertByte- 00:06:41.100 [2024-07-15 16:23:20.586349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.586375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.586436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.586455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.586514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:1212bf12 cdw11:12121231 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.586527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.100 #23 NEW cov: 12106 ft: 13617 corp: 7/170b lim: 40 exec/s: 0 rss: 70Mb L: 25/32 MS: 1 InsertByte- 00:06:41.100 [2024-07-15 16:23:20.626450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.626475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.626532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.626547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.626604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12123001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.626617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.100 #24 NEW cov: 12106 ft: 13679 corp: 8/194b lim: 40 exec/s: 0 rss: 70Mb L: 24/32 MS: 1 ChangeASCIIInt- 00:06:41.100 [2024-07-15 16:23:20.666551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.666577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.666636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.666650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.100 [2024-07-15 16:23:20.666719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12120a22 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.100 [2024-07-15 16:23:20.666732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.359 #25 NEW cov: 12106 ft: 13770 corp: 9/225b lim: 40 exec/s: 0 rss: 71Mb L: 31/32 MS: 1 CopyPart- 00:06:41.359 [2024-07-15 16:23:20.716716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.716742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.716801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.716815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.716874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.716887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.359 #30 NEW cov: 12106 ft: 13824 corp: 10/252b lim: 40 exec/s: 0 rss: 71Mb L: 27/32 MS: 5 InsertByte-ShuffleBytes-EraseBytes-ChangeBinInt-InsertRepeatedBytes- 00:06:41.359 [2024-07-15 16:23:20.756852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.756877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.756936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121210 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.756950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.757009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12123001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.757022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.359 #31 NEW cov: 12106 ft: 13871 corp: 11/276b lim: 40 exec/s: 0 rss: 71Mb L: 24/32 MS: 1 ChangeBit- 00:06:41.359 [2024-07-15 16:23:20.807107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.807132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.807194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.807207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.807265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.807278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.807335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.807349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.359 #32 NEW cov: 12106 ft: 13882 corp: 12/314b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:41.359 [2024-07-15 16:23:20.857253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12129212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.857279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.857338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12181212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.857352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.857408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.857421] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.857483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12123101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.857496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.359 #33 NEW cov: 12106 ft: 13906 corp: 13/346b lim: 40 exec/s: 0 rss: 71Mb L: 32/38 MS: 1 ChangeBinInt- 00:06:41.359 [2024-07-15 16:23:20.907249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12129212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.907274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.907332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.907345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.907402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12129212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.907416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.359 #34 NEW cov: 12106 ft: 13931 corp: 14/371b lim: 40 exec/s: 0 rss: 71Mb L: 25/38 MS: 1 CrossOver- 00:06:41.359 [2024-07-15 16:23:20.947462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.947488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.947548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.947562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.947632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.947646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.359 [2024-07-15 16:23:20.947707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12123112 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.359 [2024-07-15 16:23:20.947720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.617 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:41.617 #35 NEW cov: 12129 ft: 14045 corp: 15/405b lim: 40 exec/s: 0 
rss: 71Mb L: 34/38 MS: 1 CrossOver- 00:06:41.617 [2024-07-15 16:23:20.987460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:20.987485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:20.987558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:20.987572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:20.987632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:20.987646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.617 #36 NEW cov: 12129 ft: 14063 corp: 16/436b lim: 40 exec/s: 0 rss: 71Mb L: 31/38 MS: 1 ChangeBinInt- 00:06:41.617 [2024-07-15 16:23:21.027597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.027623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.027681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:feffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.027694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.027753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.027766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.617 #37 NEW cov: 12129 ft: 14163 corp: 17/463b lim: 40 exec/s: 0 rss: 71Mb L: 27/38 MS: 1 ChangeBinInt- 00:06:41.617 [2024-07-15 16:23:21.067854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.067879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.067940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.067953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.068010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.068024] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.068081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12123101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.068094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.617 #38 NEW cov: 12129 ft: 14173 corp: 18/495b lim: 40 exec/s: 38 rss: 71Mb L: 32/38 MS: 1 CopyPart- 00:06:41.617 [2024-07-15 16:23:21.107819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.107846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.107907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.107920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.107976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12120a22 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.107989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.617 #40 NEW cov: 12129 ft: 14185 corp: 19/525b lim: 40 exec/s: 40 rss: 71Mb L: 30/38 MS: 2 ChangeByte-CrossOver- 00:06:41.617 [2024-07-15 16:23:21.148021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.148045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.148104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.148118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.148178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.148191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.148248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.148261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.617 #41 NEW cov: 12129 ft: 14200 corp: 20/563b lim: 40 exec/s: 41 rss: 71Mb L: 38/38 MS: 1 ChangeBinInt- 00:06:41.617 [2024-07-15 16:23:21.198015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12129212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.198040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.198100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.198113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.617 [2024-07-15 16:23:21.198186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:1212d212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.617 [2024-07-15 16:23:21.198200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.876 #42 NEW cov: 12129 ft: 14222 corp: 21/588b lim: 40 exec/s: 42 rss: 71Mb L: 25/38 MS: 1 ChangeBit- 00:06:41.876 [2024-07-15 16:23:21.248343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12129212 cdw11:28121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.248368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.248428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.248448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.248508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.248521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.248580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12123101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.248593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.876 #43 NEW cov: 12129 ft: 14243 corp: 22/620b lim: 40 exec/s: 43 rss: 71Mb L: 32/38 MS: 1 ChangeByte- 00:06:41.876 [2024-07-15 16:23:21.288177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.288201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.288278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.288292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.876 #44 NEW cov: 12129 ft: 
14542 corp: 23/638b lim: 40 exec/s: 44 rss: 71Mb L: 18/38 MS: 1 EraseBytes- 00:06:41.876 [2024-07-15 16:23:21.338651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.338676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.338739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.338752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.338813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.338826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.338882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0d121212 cdw11:12123101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.338895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.876 #45 NEW cov: 12129 ft: 14544 corp: 24/670b lim: 40 exec/s: 45 rss: 72Mb L: 32/38 MS: 1 ChangeBinInt- 00:06:41.876 [2024-07-15 16:23:21.388758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.388783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.388860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.388874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.388932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.388946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.389004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.389017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.876 #46 NEW cov: 12129 ft: 14551 corp: 25/705b lim: 40 exec/s: 46 rss: 72Mb L: 35/38 MS: 1 InsertRepeatedBytes- 00:06:41.876 [2024-07-15 16:23:21.428862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 
[2024-07-15 16:23:21.428888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.428946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.428959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.429020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12120a22 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.429033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.876 [2024-07-15 16:23:21.429090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12123141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.876 [2024-07-15 16:23:21.429103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.876 #47 NEW cov: 12129 ft: 14603 corp: 26/737b lim: 40 exec/s: 47 rss: 72Mb L: 32/38 MS: 1 InsertByte- 00:06:42.135 [2024-07-15 16:23:21.479029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.479054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.479115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.479129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.479201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.479215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.479274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12123001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.479287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.135 #48 NEW cov: 12129 ft: 14605 corp: 27/769b lim: 40 exec/s: 48 rss: 72Mb L: 32/38 MS: 1 ChangeASCIIInt- 00:06:42.135 [2024-07-15 16:23:21.518980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.519004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.519068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.519081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.519139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.519151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.135 #49 NEW cov: 12129 ft: 14637 corp: 28/796b lim: 40 exec/s: 49 rss: 72Mb L: 27/38 MS: 1 ShuffleBytes- 00:06:42.135 [2024-07-15 16:23:21.559066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.559091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.559150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121210 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.559163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.559223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12123501 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.559236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.135 #50 NEW cov: 12129 ft: 14644 corp: 29/820b lim: 40 exec/s: 50 rss: 72Mb L: 24/38 MS: 1 ChangeASCIIInt- 00:06:42.135 [2024-07-15 16:23:21.609263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.609288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.609349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.609362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.609419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.609432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.135 #51 NEW cov: 12129 ft: 14647 corp: 30/844b lim: 40 exec/s: 51 rss: 72Mb L: 24/38 MS: 1 EraseBytes- 00:06:42.135 [2024-07-15 16:23:21.659368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:121f0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.659393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.659456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.659469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.659528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.659546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.135 #52 NEW cov: 12129 ft: 14653 corp: 31/875b lim: 40 exec/s: 52 rss: 72Mb L: 31/38 MS: 1 ChangeBinInt- 00:06:42.135 [2024-07-15 16:23:21.699448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff002500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.699473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.135 [2024-07-15 16:23:21.699534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:feffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.135 [2024-07-15 16:23:21.699547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.136 [2024-07-15 16:23:21.699607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.136 [2024-07-15 16:23:21.699620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.136 #53 NEW cov: 12129 ft: 14661 corp: 32/902b lim: 40 exec/s: 53 rss: 72Mb L: 27/38 MS: 1 ChangeByte- 00:06:42.394 [2024-07-15 16:23:21.739579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9c121292 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.394 [2024-07-15 16:23:21.739603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.394 [2024-07-15 16:23:21.739664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.394 [2024-07-15 16:23:21.739678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.394 [2024-07-15 16:23:21.739754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:121212d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.394 [2024-07-15 16:23:21.739768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.394 #54 NEW cov: 12129 ft: 14675 corp: 33/928b lim: 40 exec/s: 54 rss: 72Mb L: 26/38 MS: 1 InsertByte- 00:06:42.394 [2024-07-15 16:23:21.789763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:12121212 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:42.394 [2024-07-15 16:23:21.789788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.394 [2024-07-15 16:23:21.789849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121232 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.394 [2024-07-15 16:23:21.789863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.394 [2024-07-15 16:23:21.789923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.394 [2024-07-15 16:23:21.789936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.394 #55 NEW cov: 12129 ft: 14706 corp: 34/959b lim: 40 exec/s: 55 rss: 72Mb L: 31/38 MS: 1 ChangeBit- 00:06:42.394 [2024-07-15 16:23:21.829836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff1b00ff cdw11:ff002500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.394 [2024-07-15 16:23:21.829861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.394 [2024-07-15 16:23:21.829916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:feffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.394 [2024-07-15 16:23:21.829932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.394 [2024-07-15 16:23:21.829988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.394 [2024-07-15 16:23:21.830002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.394 #56 NEW cov: 12129 ft: 14716 corp: 35/986b lim: 40 exec/s: 56 rss: 72Mb L: 27/38 MS: 1 ChangeBinInt- 00:06:42.394 [2024-07-15 16:23:21.880002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.880026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.395 [2024-07-15 16:23:21.880086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.880100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.395 [2024-07-15 16:23:21.880157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12123101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.880170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.395 #57 NEW cov: 12129 ft: 14741 corp: 36/1010b lim: 40 exec/s: 57 rss: 72Mb L: 24/38 MS: 1 EraseBytes- 00:06:42.395 [2024-07-15 
16:23:21.920238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:121212d1 cdw11:d1d11212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.920262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.395 [2024-07-15 16:23:21.920320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.920333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.395 [2024-07-15 16:23:21.920392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.920405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.395 [2024-07-15 16:23:21.920462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.920476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.395 #58 NEW cov: 12129 ft: 14748 corp: 37/1045b lim: 40 exec/s: 58 rss: 72Mb L: 35/38 MS: 1 InsertRepeatedBytes- 00:06:42.395 [2024-07-15 16:23:21.960206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:12129212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.960231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.395 [2024-07-15 16:23:21.960289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.960302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.395 [2024-07-15 16:23:21.960362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12269212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.395 [2024-07-15 16:23:21.960376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.395 #59 NEW cov: 12129 ft: 14766 corp: 38/1070b lim: 40 exec/s: 59 rss: 72Mb L: 25/38 MS: 1 ChangeByte- 00:06:42.654 [2024-07-15 16:23:22.000408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.654 [2024-07-15 16:23:22.000433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.654 [2024-07-15 16:23:22.000495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:31011212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.654 [2024-07-15 16:23:22.000509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.654 [2024-07-15 16:23:22.000583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.654 [2024-07-15 16:23:22.000597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.654 [2024-07-15 16:23:22.000656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.654 [2024-07-15 16:23:22.000670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.654 #60 NEW cov: 12129 ft: 14786 corp: 39/1108b lim: 40 exec/s: 60 rss: 72Mb L: 38/38 MS: 1 CrossOver- 00:06:42.654 [2024-07-15 16:23:22.040525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a221212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.654 [2024-07-15 16:23:22.040550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.654 [2024-07-15 16:23:22.040613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:12121231 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.654 [2024-07-15 16:23:22.040626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.654 [2024-07-15 16:23:22.040690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:12120a22 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.654 [2024-07-15 16:23:22.040703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.654 [2024-07-15 16:23:22.040760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12123141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.654 [2024-07-15 16:23:22.040773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.654 #61 NEW cov: 12129 ft: 14791 corp: 40/1140b lim: 40 exec/s: 30 rss: 72Mb L: 32/38 MS: 1 ChangeByte- 00:06:42.654 #61 DONE cov: 12129 ft: 14791 corp: 40/1140b lim: 40 exec/s: 30 rss: 72Mb 00:06:42.654 Done 61 runs in 2 second(s) 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local 
nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:42.654 16:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:42.654 [2024-07-15 16:23:22.243609] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:42.654 [2024-07-15 16:23:22.243700] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2026989 ] 00:06:42.913 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.913 [2024-07-15 16:23:22.423599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.913 [2024-07-15 16:23:22.490248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.171 [2024-07-15 16:23:22.549483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.171 [2024-07-15 16:23:22.565792] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:43.171 INFO: Running with entropic power schedule (0xFF, 100). 00:06:43.171 INFO: Seed: 2087416608 00:06:43.171 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:43.171 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:43.171 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:43.171 INFO: A corpus is not provided, starting from an empty corpus 00:06:43.171 #2 INITED exec/s: 0 rss: 63Mb 00:06:43.171 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
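The run.sh xtrace above shows every step the harness takes before handing control to libFuzzer for fuzzer 11: pick a per-run port, rewrite the JSON target config for that port, install LeakSanitizer suppressions, and launch llvm_nvme_fuzz against the NVMe/TCP listener. The sketch below condenses those steps into a standalone form, reconstructed only from the traced commands; the SPDK_DIR shorthand, the output redirections, and exporting LSAN_OPTIONS (the script declares it local) are assumptions, since bash xtrace does not show redirections or the surrounding function.

  # Sketch of reproducing the traced fuzzer-11 run; SPDK_DIR, the redirects, and the
  # export are assumptions not visible in the xtrace output above.
  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  port=4411
  nvmf_cfg=/tmp/fuzz_json_11.conf
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  corpus_dir=$SPDK_DIR/../corpus/llvm_nvmf_11
  mkdir -p "$corpus_dir"
  # Rewrite the template config so the NVMe/TCP target listens on this run's port.
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' \
      "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # Leak suppressions used by the harness (the redirect target is assumed).
  echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
  echo leak:nvmf_ctrlr_create >> "$suppress_file"
  export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
  # Fuzzer type 11 (-Z 11) exercises admin SECURITY SEND commands, with the traced
  # core mask, memory size, and run time (-m 0x1 -s 512 -t 1).
  "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$SPDK_DIR/../output/llvm/" \
      -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
      -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z 11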
00:06:43.171 This may also happen if the target rejected all inputs we tried so far 00:06:43.171 [2024-07-15 16:23:22.631752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:8bffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.171 [2024-07-15 16:23:22.631787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.429 NEW_FUNC[1/696]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:43.429 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:43.429 #6 NEW cov: 11895 ft: 11898 corp: 2/10b lim: 40 exec/s: 0 rss: 70Mb L: 9/9 MS: 4 ShuffleBytes-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:06:43.429 [2024-07-15 16:23:22.973526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.429 [2024-07-15 16:23:22.973568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.429 [2024-07-15 16:23:22.973704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.429 [2024-07-15 16:23:22.973722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.429 [2024-07-15 16:23:22.973850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.429 [2024-07-15 16:23:22.973870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.429 [2024-07-15 16:23:22.973997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.429 [2024-07-15 16:23:22.974014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.429 #7 NEW cov: 12027 ft: 13405 corp: 3/44b lim: 40 exec/s: 0 rss: 70Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:43.687 [2024-07-15 16:23:23.032724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:8bffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.032752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.687 #10 NEW cov: 12033 ft: 13667 corp: 4/58b lim: 40 exec/s: 0 rss: 70Mb L: 14/34 MS: 3 EraseBytes-ShuffleBytes-CrossOver- 00:06:43.687 [2024-07-15 16:23:23.082936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:8bffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.082961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.687 #11 NEW cov: 12118 ft: 13936 corp: 5/67b lim: 40 exec/s: 0 rss: 70Mb L: 9/34 MS: 1 CopyPart- 00:06:43.687 [2024-07-15 
16:23:23.123795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.123822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.687 [2024-07-15 16:23:23.123965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.123981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.687 [2024-07-15 16:23:23.124102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.124120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.687 [2024-07-15 16:23:23.124246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.124264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.687 #12 NEW cov: 12118 ft: 14036 corp: 6/106b lim: 40 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 CrossOver- 00:06:43.687 [2024-07-15 16:23:23.183803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.183832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.687 [2024-07-15 16:23:23.183970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.183988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.687 [2024-07-15 16:23:23.184117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.184135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.687 #13 NEW cov: 12118 ft: 14298 corp: 7/136b lim: 40 exec/s: 0 rss: 70Mb L: 30/39 MS: 1 InsertRepeatedBytes- 00:06:43.687 [2024-07-15 16:23:23.234211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.234238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.687 [2024-07-15 16:23:23.234381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.234399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.687 
[2024-07-15 16:23:23.234530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.234545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.687 [2024-07-15 16:23:23.234679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.687 [2024-07-15 16:23:23.234696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.687 #14 NEW cov: 12118 ft: 14370 corp: 8/170b lim: 40 exec/s: 0 rss: 70Mb L: 34/39 MS: 1 ShuffleBytes- 00:06:43.687 [2024-07-15 16:23:23.273834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.688 [2024-07-15 16:23:23.273860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.688 [2024-07-15 16:23:23.273982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.688 [2024-07-15 16:23:23.273999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.688 [2024-07-15 16:23:23.274122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff8bffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.688 [2024-07-15 16:23:23.274140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.946 #19 NEW cov: 12118 ft: 14416 corp: 9/197b lim: 40 exec/s: 0 rss: 70Mb L: 27/39 MS: 5 EraseBytes-ChangeBit-CopyPart-InsertByte-InsertRepeatedBytes- 00:06:43.946 [2024-07-15 16:23:23.313761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.313787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.313920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.313940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.314067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.314083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.946 #20 NEW cov: 12118 ft: 14452 corp: 10/228b lim: 40 exec/s: 0 rss: 70Mb L: 31/39 MS: 1 CrossOver- 00:06:43.946 [2024-07-15 16:23:23.363921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:43.946 [2024-07-15 16:23:23.363946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.364073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.364089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.364216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.364234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.946 #21 NEW cov: 12118 ft: 14513 corp: 11/259b lim: 40 exec/s: 0 rss: 71Mb L: 31/39 MS: 1 ShuffleBytes- 00:06:43.946 [2024-07-15 16:23:23.424738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.424765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.424889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.424907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.425035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.425051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.425176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.425191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.946 #22 NEW cov: 12118 ft: 14523 corp: 12/297b lim: 40 exec/s: 0 rss: 71Mb L: 38/39 MS: 1 InsertRepeatedBytes- 00:06:43.946 [2024-07-15 16:23:23.484401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.484427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.484574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff8bff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.484593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.946 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:43.946 #23 NEW cov: 12141 ft: 14782 corp: 13/316b lim: 
40 exec/s: 0 rss: 71Mb L: 19/39 MS: 1 CrossOver- 00:06:43.946 [2024-07-15 16:23:23.524798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.524825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.524957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.524973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.946 [2024-07-15 16:23:23.525105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.946 [2024-07-15 16:23:23.525124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.204 #29 NEW cov: 12141 ft: 14822 corp: 14/346b lim: 40 exec/s: 0 rss: 71Mb L: 30/39 MS: 1 ChangeBinInt- 00:06:44.204 [2024-07-15 16:23:23.574936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.574961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.575092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff8bffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.575111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.575242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.575259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.204 #30 NEW cov: 12141 ft: 14848 corp: 15/372b lim: 40 exec/s: 0 rss: 71Mb L: 26/39 MS: 1 CrossOver- 00:06:44.204 [2024-07-15 16:23:23.615030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.615058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.615192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.615210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.615336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff8bffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.615353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.204 #31 NEW cov: 12141 ft: 14900 corp: 16/399b lim: 40 exec/s: 31 rss: 71Mb L: 27/39 MS: 1 CrossOver- 00:06:44.204 [2024-07-15 16:23:23.674164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:8bff0aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.674192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.204 #32 NEW cov: 12141 ft: 14962 corp: 17/409b lim: 40 exec/s: 32 rss: 71Mb L: 10/39 MS: 1 CrossOver- 00:06:44.204 [2024-07-15 16:23:23.725604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.725635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.725761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.725778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.725920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffff3dff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.725938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.726066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.726082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.204 #33 NEW cov: 12141 ft: 15016 corp: 18/443b lim: 40 exec/s: 33 rss: 71Mb L: 34/39 MS: 1 ChangeByte- 00:06:44.204 [2024-07-15 16:23:23.775757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.775786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.775915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.775932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.776066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.776083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.204 [2024-07-15 16:23:23.776218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0022 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:44.204 [2024-07-15 16:23:23.776236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.463 #34 NEW cov: 12141 ft: 15044 corp: 19/477b lim: 40 exec/s: 34 rss: 71Mb L: 34/39 MS: 1 ChangeBinInt- 00:06:44.463 [2024-07-15 16:23:23.835490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.835518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:23.835657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.835675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:23.835805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:04000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.835824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:23.835956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.835975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.463 #35 NEW cov: 12141 ft: 15127 corp: 20/511b lim: 40 exec/s: 35 rss: 71Mb L: 34/39 MS: 1 ChangeBinInt- 00:06:44.463 [2024-07-15 16:23:23.875859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.875887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:23.876023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.876040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:23.876168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.876186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:23.876317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.876335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:23.876460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.876478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.463 #36 NEW cov: 12141 ft: 15190 corp: 21/551b lim: 40 exec/s: 36 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:06:44.463 [2024-07-15 16:23:23.935387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:8bffffff cdw11:fffffdff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.935415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.463 #37 NEW cov: 12141 ft: 15250 corp: 22/560b lim: 40 exec/s: 37 rss: 71Mb L: 9/40 MS: 1 ChangeBit- 00:06:44.463 [2024-07-15 16:23:23.986083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffbfffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.986111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:23.986248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.986266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:23.986391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff8bffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:23.986409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.463 #38 NEW cov: 12141 ft: 15281 corp: 23/587b lim: 40 exec/s: 38 rss: 71Mb L: 27/40 MS: 1 ChangeBit- 00:06:44.463 [2024-07-15 16:23:24.036278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:24.036306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:24.036427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:24.036453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.463 [2024-07-15 16:23:24.036579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.463 [2024-07-15 16:23:24.036597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.723 #39 NEW cov: 12141 ft: 15298 corp: 24/618b lim: 40 exec/s: 39 rss: 71Mb L: 31/40 MS: 1 ChangeBit- 00:06:44.723 [2024-07-15 16:23:24.085921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a8bff0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.085951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.723 #40 NEW cov: 12141 ft: 15309 corp: 25/629b lim: 40 exec/s: 40 rss: 71Mb L: 11/40 MS: 1 CrossOver- 00:06:44.723 [2024-07-15 16:23:24.136862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.136891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.137018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.137035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.137170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:04000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.137188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.137319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffff30ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.137335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.723 #41 NEW cov: 12141 ft: 15380 corp: 26/663b lim: 40 exec/s: 41 rss: 71Mb L: 34/40 MS: 1 ChangeByte- 00:06:44.723 [2024-07-15 16:23:24.187023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0aff0aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.187050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.187178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.187195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.187321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffff0400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.187338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.187465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.187484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.723 #42 NEW cov: 12141 ft: 15430 corp: 27/697b lim: 40 exec/s: 42 rss: 71Mb L: 34/40 MS: 1 CopyPart- 00:06:44.723 [2024-07-15 16:23:24.236911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:44.723 [2024-07-15 16:23:24.236942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.237079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fdffffff cdw11:ff8bffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.237098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.237226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.237247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.723 #43 NEW cov: 12141 ft: 15456 corp: 28/723b lim: 40 exec/s: 43 rss: 71Mb L: 26/40 MS: 1 ChangeBit- 00:06:44.723 [2024-07-15 16:23:24.297392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.297420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.297567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.297584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.297715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.297732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.723 [2024-07-15 16:23:24.297859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.723 [2024-07-15 16:23:24.297875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.982 #44 NEW cov: 12141 ft: 15460 corp: 29/758b lim: 40 exec/s: 44 rss: 71Mb L: 35/40 MS: 1 InsertByte- 00:06:44.983 [2024-07-15 16:23:24.347235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.347262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.347398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.347416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.347558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.347577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.983 #45 NEW cov: 12141 ft: 15472 corp: 30/783b lim: 40 exec/s: 45 rss: 71Mb L: 25/40 MS: 1 EraseBytes- 00:06:44.983 [2024-07-15 16:23:24.407743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.407770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.407909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.407928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.408058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.408075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.408207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.408223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.983 #46 NEW cov: 12141 ft: 15483 corp: 31/821b lim: 40 exec/s: 46 rss: 71Mb L: 38/40 MS: 1 CopyPart- 00:06:44.983 [2024-07-15 16:23:24.457512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:fffffbff cdw11:ffbfffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.457537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.457674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.457691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.457820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff8bffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.457837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.983 #47 NEW cov: 12141 ft: 15493 corp: 32/848b lim: 40 exec/s: 47 rss: 72Mb L: 27/40 MS: 1 ChangeBit- 00:06:44.983 [2024-07-15 16:23:24.507942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffff0101 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.507970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.508098] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.508113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.508241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.508258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.508391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.508409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.983 #48 NEW cov: 12141 ft: 15494 corp: 33/882b lim: 40 exec/s: 48 rss: 72Mb L: 34/40 MS: 1 ChangeBinInt- 00:06:44.983 [2024-07-15 16:23:24.547659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.547686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.547822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.547842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.983 [2024-07-15 16:23:24.547978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff8b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.983 [2024-07-15 16:23:24.547994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.983 #49 NEW cov: 12141 ft: 15501 corp: 34/912b lim: 40 exec/s: 49 rss: 72Mb L: 30/40 MS: 1 CrossOver- 00:06:45.242 [2024-07-15 16:23:24.587199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:82828282 cdw11:8282828b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.242 [2024-07-15 16:23:24.587225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.242 [2024-07-15 16:23:24.587360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.242 [2024-07-15 16:23:24.587377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.242 #50 NEW cov: 12141 ft: 15538 corp: 35/928b lim: 40 exec/s: 25 rss: 72Mb L: 16/40 MS: 1 InsertRepeatedBytes- 00:06:45.242 #50 DONE cov: 12141 ft: 15538 corp: 35/928b lim: 40 exec/s: 25 rss: 72Mb 00:06:45.242 Done 50 runs in 2 second(s) 00:06:45.242 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:45.242 16:23:24 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:45.242 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:45.242 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:45.242 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:45.243 16:23:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:45.243 [2024-07-15 16:23:24.792220] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:45.243 [2024-07-15 16:23:24.792289] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027471 ] 00:06:45.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.502 [2024-07-15 16:23:24.967974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.502 [2024-07-15 16:23:25.036705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.502 [2024-07-15 16:23:25.095484] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.760 [2024-07-15 16:23:25.111746] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:45.760 INFO: Running with entropic power schedule (0xFF, 100). 
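The wrapper steps traced here for fuzzer 12 repeat the fuzzer-11 sequence sketched earlier; only the fuzzer index, and with it the TCP port, corpus directory, and per-run config file, change. A minimal sketch of that per-run parameterization follows, assuming the port is formed by appending the zero-padded index to a "44" prefix (the trace shows printf %02d 12 followed by port=4412, but the exact expression in run.sh is not visible) and that the sed output is redirected into the per-run config:

  # Hypothetical per-run parameterization mirroring the traced run.sh locals.
  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  fuzzer_type=12                          # passed in as start_llvm_fuzz 12 1 0x1
  port=44$(printf %02d "$fuzzer_type")    # 4411 for fuzzer 11, 4412 for fuzzer 12, ...
  corpus_dir=$SPDK_DIR/../corpus/llvm_nvmf_$fuzzer_type
  nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
  mkdir -p "$corpus_dir"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # Each run gets its own config with the listener port swapped in (redirect assumed).
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"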
00:06:45.760 INFO: Seed: 339445758 00:06:45.760 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:45.760 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:45.760 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:45.760 INFO: A corpus is not provided, starting from an empty corpus 00:06:45.760 #2 INITED exec/s: 0 rss: 63Mb 00:06:45.760 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:45.760 This may also happen if the target rejected all inputs we tried so far 00:06:45.760 [2024-07-15 16:23:25.156435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.760 [2024-07-15 16:23:25.156476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.018 NEW_FUNC[1/696]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:46.018 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:46.018 #8 NEW cov: 11895 ft: 11896 corp: 2/13b lim: 40 exec/s: 0 rss: 70Mb L: 12/12 MS: 1 InsertRepeatedBytes- 00:06:46.018 [2024-07-15 16:23:25.497258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.018 [2024-07-15 16:23:25.497296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.018 #9 NEW cov: 12025 ft: 12358 corp: 3/26b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 InsertByte- 00:06:46.018 [2024-07-15 16:23:25.577397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.018 [2024-07-15 16:23:25.577429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.276 #10 NEW cov: 12031 ft: 12635 corp: 4/39b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 CopyPart- 00:06:46.276 [2024-07-15 16:23:25.657568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.276 [2024-07-15 16:23:25.657597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.276 #16 NEW cov: 12116 ft: 12863 corp: 5/51b lim: 40 exec/s: 0 rss: 70Mb L: 12/13 MS: 1 ChangeByte- 00:06:46.276 [2024-07-15 16:23:25.707687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:6363630a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.276 [2024-07-15 16:23:25.707717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.276 #17 NEW cov: 12116 ft: 13023 corp: 6/64b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 CrossOver- 00:06:46.276 [2024-07-15 16:23:25.757820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:6363635b cdw11:6363630a SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:46.276 [2024-07-15 16:23:25.757850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.276 #23 NEW cov: 12116 ft: 13156 corp: 7/77b lim: 40 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 ChangeByte- 00:06:46.276 [2024-07-15 16:23:25.838034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630a63 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.276 [2024-07-15 16:23:25.838064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.537 #24 NEW cov: 12116 ft: 13232 corp: 8/90b lim: 40 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 ShuffleBytes- 00:06:46.537 [2024-07-15 16:23:25.888310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:1f1f1f1f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:25.888339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.537 [2024-07-15 16:23:25.888386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:25.888401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.537 [2024-07-15 16:23:25.888430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:25.888451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.537 [2024-07-15 16:23:25.888480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:1f1f1f63 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:25.888495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.537 #25 NEW cov: 12116 ft: 14127 corp: 9/126b lim: 40 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:46.537 [2024-07-15 16:23:25.968390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:6363635c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:25.968418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.537 [2024-07-15 16:23:25.968472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5c5c5c5c cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:25.968488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.537 #26 NEW cov: 12116 ft: 14394 corp: 10/144b lim: 40 exec/s: 0 rss: 71Mb L: 18/36 MS: 1 InsertRepeatedBytes- 00:06:46.537 [2024-07-15 16:23:26.028671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:1f1f1f1f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:26.028700] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.537 [2024-07-15 16:23:26.028732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:26.028747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.537 [2024-07-15 16:23:26.028774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:26.028788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.537 [2024-07-15 16:23:26.028815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:1f1f631f cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:26.028850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.537 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:46.537 #27 NEW cov: 12139 ft: 14463 corp: 11/180b lim: 40 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 ShuffleBytes- 00:06:46.537 [2024-07-15 16:23:26.108727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63630063 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.537 [2024-07-15 16:23:26.108756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.854 #28 NEW cov: 12139 ft: 14491 corp: 12/194b lim: 40 exec/s: 28 rss: 71Mb L: 14/36 MS: 1 InsertByte- 00:06:46.854 [2024-07-15 16:23:26.158934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:6363ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.854 [2024-07-15 16:23:26.158965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.854 [2024-07-15 16:23:26.158999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.854 [2024-07-15 16:23:26.159015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.854 #29 NEW cov: 12139 ft: 14526 corp: 13/212b lim: 40 exec/s: 29 rss: 71Mb L: 18/36 MS: 1 InsertRepeatedBytes- 00:06:46.854 [2024-07-15 16:23:26.219105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.854 [2024-07-15 16:23:26.219134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.854 [2024-07-15 16:23:26.219182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.854 [2024-07-15 16:23:26.219199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.854 [2024-07-15 
16:23:26.219228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.854 [2024-07-15 16:23:26.219243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.854 #30 NEW cov: 12139 ft: 14830 corp: 14/243b lim: 40 exec/s: 30 rss: 71Mb L: 31/36 MS: 1 InsertRepeatedBytes- 00:06:46.854 [2024-07-15 16:23:26.299199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.855 [2024-07-15 16:23:26.299227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.855 #36 NEW cov: 12139 ft: 14856 corp: 15/256b lim: 40 exec/s: 36 rss: 71Mb L: 13/36 MS: 1 ChangeBit- 00:06:46.855 [2024-07-15 16:23:26.349326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63630063 cdw11:e3636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.855 [2024-07-15 16:23:26.349355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.855 #37 NEW cov: 12139 ft: 14897 corp: 16/270b lim: 40 exec/s: 37 rss: 71Mb L: 14/36 MS: 1 ChangeBit- 00:06:46.855 [2024-07-15 16:23:26.429617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.855 [2024-07-15 16:23:26.429648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.114 #38 NEW cov: 12139 ft: 14908 corp: 17/282b lim: 40 exec/s: 38 rss: 71Mb L: 12/36 MS: 1 ShuffleBytes- 00:06:47.114 [2024-07-15 16:23:26.479687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.114 [2024-07-15 16:23:26.479715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.114 #39 NEW cov: 12139 ft: 14928 corp: 18/295b lim: 40 exec/s: 39 rss: 71Mb L: 13/36 MS: 1 ChangeBit- 00:06:47.114 [2024-07-15 16:23:26.559885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63630063 cdw11:e3636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.114 [2024-07-15 16:23:26.559913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.114 #40 NEW cov: 12139 ft: 14934 corp: 19/309b lim: 40 exec/s: 40 rss: 71Mb L: 14/36 MS: 1 CrossOver- 00:06:47.114 [2024-07-15 16:23:26.640090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630a63 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.114 [2024-07-15 16:23:26.640120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.114 #41 NEW cov: 12139 ft: 14965 corp: 20/322b lim: 40 exec/s: 41 rss: 71Mb L: 13/36 MS: 1 ShuffleBytes- 00:06:47.114 [2024-07-15 16:23:26.690215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:47.114 [2024-07-15 16:23:26.690245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.373 #42 NEW cov: 12139 ft: 14971 corp: 21/335b lim: 40 exec/s: 42 rss: 71Mb L: 13/36 MS: 1 InsertByte- 00:06:47.373 [2024-07-15 16:23:26.750377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63634363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.373 [2024-07-15 16:23:26.750407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.373 #43 NEW cov: 12139 ft: 14999 corp: 22/346b lim: 40 exec/s: 43 rss: 72Mb L: 11/36 MS: 1 EraseBytes- 00:06:47.373 [2024-07-15 16:23:26.830643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.373 [2024-07-15 16:23:26.830672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.373 [2024-07-15 16:23:26.830720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00006363 cdw11:4363632c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.373 [2024-07-15 16:23:26.830736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.373 #44 NEW cov: 12139 ft: 15003 corp: 23/363b lim: 40 exec/s: 44 rss: 72Mb L: 17/36 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:47.373 [2024-07-15 16:23:26.890754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:6363635c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.373 [2024-07-15 16:23:26.890783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.373 #45 NEW cov: 12139 ft: 15031 corp: 24/376b lim: 40 exec/s: 45 rss: 72Mb L: 13/36 MS: 1 EraseBytes- 00:06:47.373 [2024-07-15 16:23:26.961550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:6363635c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.373 [2024-07-15 16:23:26.961577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.636 #46 NEW cov: 12139 ft: 15131 corp: 25/389b lim: 40 exec/s: 46 rss: 72Mb L: 13/36 MS: 1 ShuffleBytes- 00:06:47.636 [2024-07-15 16:23:27.011713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63634363 cdw11:6363630a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.636 [2024-07-15 16:23:27.011741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.636 #47 NEW cov: 12139 ft: 15268 corp: 26/402b lim: 40 exec/s: 47 rss: 72Mb L: 13/36 MS: 1 ChangeBit- 00:06:47.636 [2024-07-15 16:23:27.051815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:6363a763 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.636 [2024-07-15 16:23:27.051840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.636 #48 NEW cov: 12139 ft: 15284 corp: 27/415b lim: 40 
exec/s: 48 rss: 72Mb L: 13/36 MS: 1 ChangeBinInt- 00:06:47.636 [2024-07-15 16:23:27.091906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630a63 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.636 [2024-07-15 16:23:27.091930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.636 #49 NEW cov: 12139 ft: 15330 corp: 28/428b lim: 40 exec/s: 49 rss: 72Mb L: 13/36 MS: 1 ShuffleBytes- 00:06:47.636 [2024-07-15 16:23:27.142228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:6363635c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.636 [2024-07-15 16:23:27.142253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.636 [2024-07-15 16:23:27.142309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:4c5c5c5c cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.636 [2024-07-15 16:23:27.142323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.636 #50 NEW cov: 12139 ft: 15349 corp: 29/446b lim: 40 exec/s: 25 rss: 72Mb L: 18/36 MS: 1 ChangeBit- 00:06:47.636 #50 DONE cov: 12139 ft: 15349 corp: 29/446b lim: 40 exec/s: 25 rss: 72Mb 00:06:47.636 ###### Recommended dictionary. ###### 00:06:47.636 "\000\000\000\000" # Uses: 0 00:06:47.636 ###### End of recommended dictionary. ###### 00:06:47.636 Done 50 runs in 2 second(s) 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:47.895 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:47.896 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:47.896 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:47.896 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:47.896 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:47.896 16:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:47.896 [2024-07-15 16:23:27.338243] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:47.896 [2024-07-15 16:23:27.338338] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2028000 ] 00:06:47.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.155 [2024-07-15 16:23:27.519814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.155 [2024-07-15 16:23:27.584964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.155 [2024-07-15 16:23:27.643903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.155 [2024-07-15 16:23:27.660206] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:48.155 INFO: Running with entropic power schedule (0xFF, 100). 00:06:48.155 INFO: Seed: 2885447980 00:06:48.155 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:48.155 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:48.155 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:48.155 INFO: A corpus is not provided, starting from an empty corpus 00:06:48.155 #2 INITED exec/s: 0 rss: 63Mb 00:06:48.155 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:48.156 This may also happen if the target rejected all inputs we tried so far 00:06:48.156 [2024-07-15 16:23:27.708890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:f3f30000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.156 [2024-07-15 16:23:27.708918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.724 NEW_FUNC[1/695]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:48.724 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:48.724 #11 NEW cov: 11883 ft: 11883 corp: 2/9b lim: 40 exec/s: 0 rss: 70Mb L: 8/8 MS: 4 ChangeBinInt-CMP-ShuffleBytes-InsertRepeatedBytes- DE: "]\000\000\000"- 00:06:48.724 [2024-07-15 16:23:28.029654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f30000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.029685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.724 #17 NEW cov: 12013 ft: 12424 corp: 3/17b lim: 40 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 ChangeBit- 00:06:48.724 [2024-07-15 16:23:28.079798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5df3f3 cdw11:f300ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.079824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.724 [2024-07-15 16:23:28.079880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.079893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.724 #19 NEW cov: 12019 ft: 12976 corp: 4/40b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:48.724 [2024-07-15 16:23:28.120181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f3ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.120209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.724 [2024-07-15 16:23:28.120281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.120295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.724 [2024-07-15 16:23:28.120348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.120361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.724 [2024-07-15 16:23:28.120412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.120426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.724 #20 NEW cov: 12104 ft: 13748 corp: 5/77b lim: 40 exec/s: 0 rss: 70Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:48.724 [2024-07-15 16:23:28.170305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f3ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.170329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.724 [2024-07-15 16:23:28.170384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.170397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.724 [2024-07-15 16:23:28.170452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.170466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.724 [2024-07-15 16:23:28.170520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.170532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.724 #21 NEW cov: 12104 ft: 13864 corp: 6/114b lim: 40 exec/s: 0 rss: 70Mb L: 37/37 MS: 1 ShuffleBytes- 00:06:48.724 [2024-07-15 16:23:28.220104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00f3e3f3 cdw11:e3f30000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.220128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.724 #22 NEW cov: 12104 ft: 13989 corp: 7/122b lim: 40 exec/s: 0 rss: 70Mb L: 8/37 MS: 1 CopyPart- 00:06:48.724 [2024-07-15 16:23:28.260199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:f7f30000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.260224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.724 #23 NEW cov: 12104 ft: 14075 corp: 8/130b lim: 40 exec/s: 0 rss: 70Mb L: 8/37 MS: 1 ChangeBinInt- 00:06:48.724 [2024-07-15 16:23:28.300563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5df3f3 cdw11:f300ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.300593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.724 [2024-07-15 16:23:28.300647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.300661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.724 [2024-07-15 16:23:28.300714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.724 [2024-07-15 16:23:28.300744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.983 #24 NEW cov: 12104 ft: 14304 corp: 9/156b lim: 40 exec/s: 0 rss: 70Mb L: 26/37 MS: 1 InsertRepeatedBytes- 00:06:48.983 [2024-07-15 16:23:28.350554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5df3f3 cdw11:be00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.983 [2024-07-15 16:23:28.350580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.983 [2024-07-15 16:23:28.350633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.983 [2024-07-15 16:23:28.350646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.983 #25 NEW cov: 12104 ft: 14346 corp: 10/179b lim: 40 exec/s: 0 rss: 70Mb L: 23/37 MS: 1 ChangeByte- 00:06:48.983 [2024-07-15 16:23:28.390578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e300f300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.983 [2024-07-15 16:23:28.390603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.983 #26 NEW cov: 12104 ft: 14463 corp: 11/187b lim: 40 exec/s: 0 rss: 70Mb L: 8/37 MS: 1 ShuffleBytes- 00:06:48.983 [2024-07-15 16:23:28.430699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0fffffff cdw11:ff5d00f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.983 [2024-07-15 16:23:28.430724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.983 #27 NEW cov: 12104 ft: 14467 corp: 12/199b lim: 40 exec/s: 0 rss: 70Mb L: 12/37 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:48.983 [2024-07-15 16:23:28.470834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5dfaf3 cdw11:e300f300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.983 [2024-07-15 16:23:28.470859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.983 #28 NEW cov: 12104 ft: 14527 corp: 13/207b lim: 40 exec/s: 0 rss: 70Mb L: 8/37 MS: 1 ChangeBinInt- 00:06:48.983 [2024-07-15 16:23:28.521076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0afcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.983 [2024-07-15 16:23:28.521100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.983 [2024-07-15 16:23:28.521157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 
cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.983 [2024-07-15 16:23:28.521170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.983 #29 NEW cov: 12104 ft: 14550 corp: 14/228b lim: 40 exec/s: 0 rss: 70Mb L: 21/37 MS: 1 InsertRepeatedBytes- 00:06:48.983 [2024-07-15 16:23:28.561046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0fffffff cdw11:ff0000f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.983 [2024-07-15 16:23:28.561073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.242 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:49.242 #30 NEW cov: 12127 ft: 14585 corp: 15/240b lim: 40 exec/s: 0 rss: 71Mb L: 12/37 MS: 1 CrossOver- 00:06:49.242 [2024-07-15 16:23:28.611165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:f3f3c1c1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.242 [2024-07-15 16:23:28.611190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.242 #31 NEW cov: 12127 ft: 14626 corp: 16/252b lim: 40 exec/s: 0 rss: 71Mb L: 12/37 MS: 1 InsertRepeatedBytes- 00:06:49.242 [2024-07-15 16:23:28.651419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:f3f3c101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.242 [2024-07-15 16:23:28.651448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.242 [2024-07-15 16:23:28.651504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2b1f7d08 cdw11:d82aa8c1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.242 [2024-07-15 16:23:28.651517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.242 #32 NEW cov: 12127 ft: 14641 corp: 17/272b lim: 40 exec/s: 0 rss: 71Mb L: 20/37 MS: 1 CMP- DE: "\001+\037}\010\330*\250"- 00:06:49.242 [2024-07-15 16:23:28.701803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f3ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.242 [2024-07-15 16:23:28.701827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.242 [2024-07-15 16:23:28.701883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.242 [2024-07-15 16:23:28.701897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.242 [2024-07-15 16:23:28.701950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff0100ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.242 [2024-07-15 16:23:28.701963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.243 [2024-07-15 16:23:28.702018] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.243 [2024-07-15 16:23:28.702031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.243 #33 NEW cov: 12127 ft: 14654 corp: 18/309b lim: 40 exec/s: 33 rss: 71Mb L: 37/37 MS: 1 ChangeBinInt- 00:06:49.243 [2024-07-15 16:23:28.741578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.243 [2024-07-15 16:23:28.741603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.243 #34 NEW cov: 12127 ft: 14713 corp: 19/317b lim: 40 exec/s: 34 rss: 71Mb L: 8/37 MS: 1 CrossOver- 00:06:49.243 [2024-07-15 16:23:28.791711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0fff00ff cdw11:fff3ff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.243 [2024-07-15 16:23:28.791736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.243 #35 NEW cov: 12127 ft: 14724 corp: 20/329b lim: 40 exec/s: 35 rss: 71Mb L: 12/37 MS: 1 ShuffleBytes- 00:06:49.502 [2024-07-15 16:23:28.841828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0fa300f3 cdw11:e300f300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:28.841853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.502 #36 NEW cov: 12127 ft: 14738 corp: 21/337b lim: 40 exec/s: 36 rss: 71Mb L: 8/37 MS: 1 ChangeBinInt- 00:06:49.502 [2024-07-15 16:23:28.881965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0fffffff cdw11:7f0000f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:28.881990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.502 #37 NEW cov: 12127 ft: 14790 corp: 22/349b lim: 40 exec/s: 37 rss: 71Mb L: 12/37 MS: 1 ChangeBit- 00:06:49.502 [2024-07-15 16:23:28.922036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f300e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:28.922062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.502 #38 NEW cov: 12127 ft: 14802 corp: 23/357b lim: 40 exec/s: 38 rss: 71Mb L: 8/37 MS: 1 ChangeByte- 00:06:49.502 [2024-07-15 16:23:28.962303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5df3f3 cdw11:be00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:28.962328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.502 [2024-07-15 16:23:28.962385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:28.962398] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.502 #39 NEW cov: 12127 ft: 14822 corp: 24/380b lim: 40 exec/s: 39 rss: 71Mb L: 23/37 MS: 1 CrossOver- 00:06:49.502 [2024-07-15 16:23:29.012311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a8bb94b cdw11:3f7d1f2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:29.012336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.502 #40 NEW cov: 12127 ft: 14831 corp: 25/389b lim: 40 exec/s: 40 rss: 71Mb L: 9/37 MS: 1 CMP- DE: "\213\271K?}\037+\000"- 00:06:49.502 [2024-07-15 16:23:29.052826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f3ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:29.052850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.502 [2024-07-15 16:23:29.052905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:5cffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:29.052919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.502 [2024-07-15 16:23:29.052973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:29.052986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.502 [2024-07-15 16:23:29.053042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:29.053055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.502 #41 NEW cov: 12127 ft: 14834 corp: 26/426b lim: 40 exec/s: 41 rss: 71Mb L: 37/37 MS: 1 ChangeByte- 00:06:49.502 [2024-07-15 16:23:29.092559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8bb94b3f cdw11:7d1f2b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.502 [2024-07-15 16:23:29.092583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.761 #42 NEW cov: 12127 ft: 14838 corp: 27/434b lim: 40 exec/s: 42 rss: 71Mb L: 8/37 MS: 1 PersAutoDict- DE: "\213\271K?}\037+\000"- 00:06:49.761 [2024-07-15 16:23:29.142687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0fffffff cdw11:ff5d00f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.761 [2024-07-15 16:23:29.142711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.761 #43 NEW cov: 12127 ft: 14848 corp: 28/446b lim: 40 exec/s: 43 rss: 71Mb L: 12/37 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:49.761 [2024-07-15 16:23:29.192831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0fa3ffff cdw11:fffff300 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.761 [2024-07-15 16:23:29.192855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.761 #44 NEW cov: 12127 ft: 14852 corp: 29/454b lim: 40 exec/s: 44 rss: 71Mb L: 8/37 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:49.761 [2024-07-15 16:23:29.242972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5dfa5d cdw11:000000f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.761 [2024-07-15 16:23:29.242995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.761 #45 NEW cov: 12127 ft: 14859 corp: 30/466b lim: 40 exec/s: 45 rss: 71Mb L: 12/37 MS: 1 PersAutoDict- DE: "]\000\000\000"- 00:06:49.761 [2024-07-15 16:23:29.293217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f30001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.761 [2024-07-15 16:23:29.293242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.761 [2024-07-15 16:23:29.293300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2b1f7d08 cdw11:d82aa8c1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.761 [2024-07-15 16:23:29.293313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.761 #46 NEW cov: 12127 ft: 14871 corp: 31/486b lim: 40 exec/s: 46 rss: 72Mb L: 20/37 MS: 1 CrossOver- 00:06:49.761 [2024-07-15 16:23:29.343612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f3ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.761 [2024-07-15 16:23:29.343637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.761 [2024-07-15 16:23:29.343692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.761 [2024-07-15 16:23:29.343705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.761 [2024-07-15 16:23:29.343772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.762 [2024-07-15 16:23:29.343786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.762 [2024-07-15 16:23:29.343839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.762 [2024-07-15 16:23:29.343852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.021 #47 NEW cov: 12127 ft: 14880 corp: 32/523b lim: 40 exec/s: 47 rss: 72Mb L: 37/37 MS: 1 ChangeBinInt- 00:06:50.022 [2024-07-15 16:23:29.383516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0afcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:50.022 [2024-07-15 16:23:29.383541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.022 [2024-07-15 16:23:29.383597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.383611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.022 #48 NEW cov: 12127 ft: 14889 corp: 33/544b lim: 40 exec/s: 48 rss: 72Mb L: 21/37 MS: 1 ShuffleBytes- 00:06:50.022 [2024-07-15 16:23:29.433909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f3ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.433933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.022 [2024-07-15 16:23:29.433987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.434000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.022 [2024-07-15 16:23:29.434054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff0100ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.434067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.022 [2024-07-15 16:23:29.434121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0f5dfaf3 cdw11:e300f300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.434134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.022 #49 NEW cov: 12127 ft: 14901 corp: 34/581b lim: 40 exec/s: 49 rss: 72Mb L: 37/37 MS: 1 CrossOver- 00:06:50.022 [2024-07-15 16:23:29.483665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0fffffff cdw11:012b1f7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.483690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.022 #50 NEW cov: 12127 ft: 14951 corp: 35/593b lim: 40 exec/s: 50 rss: 72Mb L: 12/37 MS: 1 PersAutoDict- DE: "\001+\037}\010\330*\250"- 00:06:50.022 [2024-07-15 16:23:29.523756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0fffffff cdw11:ff5d00f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.523780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.022 #51 NEW cov: 12127 ft: 14974 corp: 36/605b lim: 40 exec/s: 51 rss: 72Mb L: 12/37 MS: 1 ChangeBit- 00:06:50.022 [2024-07-15 16:23:29.564040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f000000 cdw11:005d00f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.564065] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.022 [2024-07-15 16:23:29.564120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:f3f3c1c1 cdw11:c1c10000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.564133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.022 #52 NEW cov: 12127 ft: 14976 corp: 37/621b lim: 40 exec/s: 52 rss: 72Mb L: 16/37 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:50.022 [2024-07-15 16:23:29.604038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d10f3 cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.022 [2024-07-15 16:23:29.604062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.282 #53 NEW cov: 12127 ft: 15004 corp: 38/629b lim: 40 exec/s: 53 rss: 72Mb L: 8/37 MS: 1 ChangeBit- 00:06:50.282 [2024-07-15 16:23:29.654416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00ff cdw11:fffffff3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.282 [2024-07-15 16:23:29.654440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.282 [2024-07-15 16:23:29.654516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:f3f3c101 cdw11:2b1f7d08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.282 [2024-07-15 16:23:29.654530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.282 [2024-07-15 16:23:29.654587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:d82aa8c1 cdw11:c1c10000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.282 [2024-07-15 16:23:29.654601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.282 #54 NEW cov: 12127 ft: 15026 corp: 39/653b lim: 40 exec/s: 54 rss: 72Mb L: 24/37 MS: 1 InsertRepeatedBytes- 00:06:50.283 [2024-07-15 16:23:29.694638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f5d00f3 cdw11:e3f3ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.283 [2024-07-15 16:23:29.694663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.283 [2024-07-15 16:23:29.694720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.283 [2024-07-15 16:23:29.694733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.283 [2024-07-15 16:23:29.694786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff0100ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.283 [2024-07-15 16:23:29.694799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.283 [2024-07-15 16:23:29.694853] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.283 [2024-07-15 16:23:29.694866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.283 #55 NEW cov: 12127 ft: 15034 corp: 40/690b lim: 40 exec/s: 27 rss: 72Mb L: 37/37 MS: 1 ShuffleBytes- 00:06:50.283 #55 DONE cov: 12127 ft: 15034 corp: 40/690b lim: 40 exec/s: 27 rss: 72Mb 00:06:50.283 ###### Recommended dictionary. ###### 00:06:50.283 "]\000\000\000" # Uses: 2 00:06:50.283 "\377\377\377\377" # Uses: 2 00:06:50.283 "\001+\037}\010\330*\250" # Uses: 1 00:06:50.283 "\213\271K?}\037+\000" # Uses: 1 00:06:50.283 "\000\000\000\000" # Uses: 0 00:06:50.283 ###### End of recommended dictionary. ###### 00:06:50.283 Done 55 runs in 2 second(s) 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:50.283 16:23:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:50.542 [2024-07-15 16:23:29.885982] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 
initialization... 00:06:50.542 [2024-07-15 16:23:29.886073] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2028300 ] 00:06:50.542 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.542 [2024-07-15 16:23:30.067902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.802 [2024-07-15 16:23:30.142977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.802 [2024-07-15 16:23:30.201911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.802 [2024-07-15 16:23:30.218217] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:50.802 INFO: Running with entropic power schedule (0xFF, 100). 00:06:50.802 INFO: Seed: 1150487707 00:06:50.802 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:50.802 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:50.802 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:50.802 INFO: A corpus is not provided, starting from an empty corpus 00:06:50.802 #2 INITED exec/s: 0 rss: 63Mb 00:06:50.802 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:50.802 This may also happen if the target rejected all inputs we tried so far 00:06:50.802 [2024-07-15 16:23:30.284424] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.802 [2024-07-15 16:23:30.284471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.802 [2024-07-15 16:23:30.284609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.802 [2024-07-15 16:23:30.284634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.061 NEW_FUNC[1/696]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:51.061 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:51.061 #9 NEW cov: 11876 ft: 11875 corp: 2/15b lim: 35 exec/s: 0 rss: 70Mb L: 14/14 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:51.061 [2024-07-15 16:23:30.615780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.061 [2024-07-15 16:23:30.615821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.061 [2024-07-15 16:23:30.615951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.061 [2024-07-15 16:23:30.615969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.061 [2024-07-15 16:23:30.616092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 
cdw10:0000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.061 [2024-07-15 16:23:30.616109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.061 #15 NEW cov: 12014 ft: 12790 corp: 3/39b lim: 35 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:06:51.320 [2024-07-15 16:23:30.665590] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.320 [2024-07-15 16:23:30.665618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.320 [2024-07-15 16:23:30.665747] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.320 [2024-07-15 16:23:30.665764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.320 [2024-07-15 16:23:30.665893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.320 [2024-07-15 16:23:30.665910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.320 [2024-07-15 16:23:30.666038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.320 [2024-07-15 16:23:30.666056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.320 #17 NEW cov: 12020 ft: 13224 corp: 4/72b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:51.320 [2024-07-15 16:23:30.706103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.320 [2024-07-15 16:23:30.706130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.320 [2024-07-15 16:23:30.706255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.320 [2024-07-15 16:23:30.706273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.320 [2024-07-15 16:23:30.706400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.320 [2024-07-15 16:23:30.706418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.321 [2024-07-15 16:23:30.706543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.321 [2024-07-15 16:23:30.706561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.321 #18 NEW cov: 12105 ft: 13563 corp: 5/105b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 ChangeByte- 00:06:51.321 [2024-07-15 16:23:30.755286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.321 [2024-07-15 
16:23:30.755315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.321 [2024-07-15 16:23:30.755448] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000fd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.321 [2024-07-15 16:23:30.755469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.321 #19 NEW cov: 12105 ft: 13709 corp: 6/119b lim: 35 exec/s: 0 rss: 70Mb L: 14/33 MS: 1 ChangeBinInt- 00:06:51.321 [2024-07-15 16:23:30.806003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.321 [2024-07-15 16:23:30.806032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.321 NEW_FUNC[1/2]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:51.321 NEW_FUNC[2/2]: 0x11f0900 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1765 00:06:51.321 #21 NEW cov: 12138 ft: 13988 corp: 7/133b lim: 35 exec/s: 0 rss: 70Mb L: 14/33 MS: 2 ShuffleBytes-CrossOver- 00:06:51.321 [2024-07-15 16:23:30.856636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.321 [2024-07-15 16:23:30.856665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.321 [2024-07-15 16:23:30.856792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.321 [2024-07-15 16:23:30.856812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.321 [2024-07-15 16:23:30.856941] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.321 [2024-07-15 16:23:30.856961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.321 [2024-07-15 16:23:30.857098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.321 [2024-07-15 16:23:30.857117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.321 #22 NEW cov: 12138 ft: 14079 corp: 8/166b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 ShuffleBytes- 00:06:51.580 [2024-07-15 16:23:30.916350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:30.916382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.580 [2024-07-15 16:23:30.916519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:30.916546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE 
(01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.580 #23 NEW cov: 12138 ft: 14125 corp: 9/180b lim: 35 exec/s: 0 rss: 71Mb L: 14/33 MS: 1 ShuffleBytes- 00:06:51.580 [2024-07-15 16:23:30.956223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:30.956254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.580 [2024-07-15 16:23:30.956393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:30.956423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.580 #24 NEW cov: 12138 ft: 14159 corp: 10/195b lim: 35 exec/s: 0 rss: 71Mb L: 15/33 MS: 1 InsertByte- 00:06:51.580 [2024-07-15 16:23:31.016187] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:31.016216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.580 #28 NEW cov: 12138 ft: 14764 corp: 11/207b lim: 35 exec/s: 0 rss: 71Mb L: 12/33 MS: 4 ShuffleBytes-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:06:51.580 [2024-07-15 16:23:31.056612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:31.056642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.580 [2024-07-15 16:23:31.056785] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:31.056808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.580 #29 NEW cov: 12138 ft: 14813 corp: 12/222b lim: 35 exec/s: 0 rss: 71Mb L: 15/33 MS: 1 InsertByte- 00:06:51.580 [2024-07-15 16:23:31.096942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:31.096972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.580 [2024-07-15 16:23:31.097102] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:31.097120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.580 [2024-07-15 16:23:31.097257] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:31.097278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.580 #30 NEW cov: 12138 ft: 14867 corp: 13/243b lim: 35 exec/s: 0 rss: 71Mb L: 21/33 MS: 1 CopyPart- 00:06:51.580 [2024-07-15 16:23:31.157215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:31.157244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.580 [2024-07-15 16:23:31.157382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:31.157400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.580 [2024-07-15 16:23:31.157541] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.580 [2024-07-15 16:23:31.157560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.839 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:51.839 #31 NEW cov: 12161 ft: 14900 corp: 14/266b lim: 35 exec/s: 0 rss: 71Mb L: 23/33 MS: 1 CopyPart- 00:06:51.839 [2024-07-15 16:23:31.207132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.839 [2024-07-15 16:23:31.207165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.839 [2024-07-15 16:23:31.207304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000fd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.839 [2024-07-15 16:23:31.207327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.839 #32 NEW cov: 12161 ft: 14925 corp: 15/281b lim: 35 exec/s: 0 rss: 71Mb L: 15/33 MS: 1 InsertByte- 00:06:51.840 [2024-07-15 16:23:31.267337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.840 [2024-07-15 16:23:31.267364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.840 #33 NEW cov: 12161 ft: 14933 corp: 16/295b lim: 35 exec/s: 33 rss: 71Mb L: 14/33 MS: 1 ChangeBit- 00:06:51.840 [2024-07-15 16:23:31.327549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.840 [2024-07-15 16:23:31.327581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.840 #34 NEW cov: 12161 ft: 14942 corp: 17/313b lim: 35 exec/s: 34 rss: 71Mb L: 18/33 MS: 1 CMP- DE: "\035\001\000\000"- 00:06:51.840 [2024-07-15 16:23:31.387676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.840 [2024-07-15 16:23:31.387710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.840 #35 NEW cov: 12161 ft: 14951 corp: 18/331b lim: 35 exec/s: 35 rss: 71Mb L: 18/33 MS: 1 ChangeBinInt- 00:06:52.099 [2024-07-15 16:23:31.448144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.448176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.099 [2024-07-15 16:23:31.448320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.448341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.099 [2024-07-15 16:23:31.448478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.448497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.099 #36 NEW cov: 12161 ft: 14974 corp: 19/355b lim: 35 exec/s: 36 rss: 71Mb L: 24/33 MS: 1 ChangeBit- 00:06:52.099 [2024-07-15 16:23:31.508084] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.508115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.099 #37 NEW cov: 12161 ft: 14998 corp: 20/374b lim: 35 exec/s: 37 rss: 72Mb L: 19/33 MS: 1 CrossOver- 00:06:52.099 [2024-07-15 16:23:31.568209] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.568243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.099 [2024-07-15 16:23:31.568378] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.568401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.099 #38 NEW cov: 12161 ft: 15022 corp: 21/389b lim: 35 exec/s: 38 rss: 72Mb L: 15/33 MS: 1 ChangeByte- 00:06:52.099 [2024-07-15 16:23:31.628502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.628538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.099 #39 NEW cov: 12161 ft: 15033 corp: 22/407b lim: 35 exec/s: 39 rss: 72Mb L: 18/33 MS: 1 CopyPart- 00:06:52.099 [2024-07-15 16:23:31.678771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.678803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.099 [2024-07-15 16:23:31.678936] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.678957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.099 [2024-07-15 16:23:31.679089] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.099 [2024-07-15 16:23:31.679108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.359 #40 NEW cov: 12161 ft: 15066 corp: 23/431b lim: 35 exec/s: 40 rss: 72Mb L: 24/33 MS: 1 ChangeBit- 00:06:52.359 [2024-07-15 16:23:31.738748] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.359 [2024-07-15 16:23:31.738776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.359 [2024-07-15 16:23:31.738912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.359 [2024-07-15 16:23:31.738930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.359 #41 NEW cov: 12161 ft: 15086 corp: 24/449b lim: 35 exec/s: 41 rss: 72Mb L: 18/33 MS: 1 EraseBytes- 00:06:52.359 [2024-07-15 16:23:31.799427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.359 [2024-07-15 16:23:31.799458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.359 [2024-07-15 16:23:31.799605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.359 [2024-07-15 16:23:31.799623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.359 [2024-07-15 16:23:31.799760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.359 [2024-07-15 16:23:31.799780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.359 [2024-07-15 16:23:31.799914] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.360 [2024-07-15 16:23:31.799931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.360 #42 NEW cov: 12161 ft: 15136 corp: 25/482b lim: 35 exec/s: 42 rss: 72Mb L: 33/33 MS: 1 PersAutoDict- DE: "\035\001\000\000"- 00:06:52.360 [2024-07-15 16:23:31.849066] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.360 [2024-07-15 16:23:31.849098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.360 [2024-07-15 16:23:31.849239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.360 [2024-07-15 16:23:31.849258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.360 #43 NEW cov: 12161 ft: 15141 corp: 26/500b lim: 35 exec/s: 43 rss: 72Mb L: 18/33 MS: 1 CMP- DE: "\377\377\377~"- 00:06:52.360 [2024-07-15 16:23:31.899789] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.360 [2024-07-15 16:23:31.899817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.360 [2024-07-15 16:23:31.899956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.360 [2024-07-15 16:23:31.899979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.360 [2024-07-15 16:23:31.900111] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.360 [2024-07-15 16:23:31.900132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.360 [2024-07-15 16:23:31.900253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.360 [2024-07-15 16:23:31.900272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.360 #44 NEW cov: 12161 ft: 15151 corp: 27/530b lim: 35 exec/s: 44 rss: 72Mb L: 30/33 MS: 1 EraseBytes- 00:06:52.619 [2024-07-15 16:23:31.959297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.619 [2024-07-15 16:23:31.959327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.619 [2024-07-15 16:23:31.959457] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.619 [2024-07-15 16:23:31.959475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.619 #45 NEW cov: 12161 ft: 15184 corp: 28/549b lim: 35 exec/s: 45 rss: 72Mb L: 19/33 MS: 1 EraseBytes- 00:06:52.619 [2024-07-15 16:23:32.019543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.619 [2024-07-15 16:23:32.019574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.619 [2024-07-15 16:23:32.019715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.619 [2024-07-15 16:23:32.019739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.619 #46 NEW cov: 12161 ft: 15203 corp: 29/564b lim: 35 exec/s: 46 rss: 72Mb L: 15/33 MS: 1 ChangeByte- 00:06:52.619 [2024-07-15 16:23:32.080079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.619 [2024-07-15 16:23:32.080109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.619 [2024-07-15 16:23:32.080247] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.619 [2024-07-15 16:23:32.080265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.619 [2024-07-15 16:23:32.080404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.619 [2024-07-15 16:23:32.080423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.619 #47 NEW cov: 12161 ft: 15215 corp: 30/590b lim: 35 exec/s: 47 rss: 73Mb L: 26/33 MS: 1 InsertRepeatedBytes- 00:06:52.619 [2024-07-15 16:23:32.119888] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.619 [2024-07-15 16:23:32.119918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.619 [2024-07-15 16:23:32.120059] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.619 [2024-07-15 16:23:32.120080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.619 #48 NEW cov: 12161 ft: 15239 corp: 31/606b lim: 35 exec/s: 48 rss: 73Mb L: 16/33 MS: 1 InsertByte- 00:06:52.620 [2024-07-15 16:23:32.170088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.620 [2024-07-15 16:23:32.170116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.620 [2024-07-15 16:23:32.170248] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.620 [2024-07-15 16:23:32.170272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.620 #49 NEW cov: 12161 ft: 15268 corp: 32/620b lim: 35 exec/s: 49 rss: 73Mb L: 14/33 MS: 1 ChangeBit- 00:06:52.880 [2024-07-15 16:23:32.221129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.880 [2024-07-15 16:23:32.221157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.880 [2024-07-15 16:23:32.221288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.880 [2024-07-15 16:23:32.221311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.880 [2024-07-15 16:23:32.221446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000085 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.880 [2024-07-15 16:23:32.221487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.880 [2024-07-15 16:23:32.221615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000006e SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:52.880 [2024-07-15 16:23:32.221632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.880 [2024-07-15 16:23:32.221764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:8000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.880 [2024-07-15 16:23:32.221784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.880 #50 NEW cov: 12161 ft: 15355 corp: 33/655b lim: 35 exec/s: 50 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:52.880 [2024-07-15 16:23:32.260162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.880 [2024-07-15 16:23:32.260190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.880 [2024-07-15 16:23:32.260332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.880 [2024-07-15 16:23:32.260350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.880 #51 NEW cov: 12161 ft: 15366 corp: 34/673b lim: 35 exec/s: 25 rss: 73Mb L: 18/35 MS: 1 ChangeByte- 00:06:52.880 #51 DONE cov: 12161 ft: 15366 corp: 34/673b lim: 35 exec/s: 25 rss: 73Mb 00:06:52.880 ###### Recommended dictionary. ###### 00:06:52.880 "\035\001\000\000" # Uses: 1 00:06:52.880 "\377\377\377~" # Uses: 0 00:06:52.880 ###### End of recommended dictionary. ###### 00:06:52.880 Done 51 runs in 2 second(s) 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 
's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:52.880 16:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:52.880 [2024-07-15 16:23:32.463236] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:52.880 [2024-07-15 16:23:32.463302] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2028826 ] 00:06:53.139 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.139 [2024-07-15 16:23:32.638513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.139 [2024-07-15 16:23:32.703412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.399 [2024-07-15 16:23:32.762505] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.399 [2024-07-15 16:23:32.778769] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:53.399 INFO: Running with entropic power schedule (0xFF, 100). 00:06:53.399 INFO: Seed: 3711480664 00:06:53.399 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:53.399 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:53.399 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:53.399 INFO: A corpus is not provided, starting from an empty corpus 00:06:53.399 #2 INITED exec/s: 0 rss: 64Mb 00:06:53.399 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:53.399 This may also happen if the target rejected all inputs we tried so far 00:06:53.399 [2024-07-15 16:23:32.823704] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.399 [2024-07-15 16:23:32.823743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.399 [2024-07-15 16:23:32.823779] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.399 [2024-07-15 16:23:32.823795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.399 [2024-07-15 16:23:32.823826] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.399 [2024-07-15 16:23:32.823841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.399 [2024-07-15 16:23:32.823871] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.399 [2024-07-15 16:23:32.823886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.658 NEW_FUNC[1/696]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:53.658 NEW_FUNC[2/696]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:53.658 #4 NEW cov: 11879 ft: 11879 corp: 2/36b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:53.658 [2024-07-15 16:23:33.164365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.658 [2024-07-15 16:23:33.164404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.658 [2024-07-15 16:23:33.164463] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.658 [2024-07-15 16:23:33.164479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.658 [2024-07-15 16:23:33.164510] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.658 [2024-07-15 16:23:33.164525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.658 #8 NEW cov: 12009 ft: 12928 corp: 3/58b lim: 35 exec/s: 0 rss: 71Mb L: 22/35 MS: 4 InsertByte-ShuffleBytes-CMP-InsertRepeatedBytes- DE: "\377\377\377\004"- 00:06:53.658 [2024-07-15 16:23:33.224564] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.658 [2024-07-15 16:23:33.224595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.658 [2024-07-15 16:23:33.224630] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.658 [2024-07-15 16:23:33.224645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.659 [2024-07-15 16:23:33.224676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.659 [2024-07-15 16:23:33.224691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.659 [2024-07-15 16:23:33.224721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.659 [2024-07-15 16:23:33.224737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.918 #9 NEW cov: 12015 ft: 13144 corp: 4/93b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBit- 00:06:53.918 [2024-07-15 16:23:33.304748] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.304793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.918 [2024-07-15 16:23:33.304827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.304843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.918 [2024-07-15 16:23:33.304873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.304890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.918 [2024-07-15 16:23:33.304919] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.304934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.918 #10 NEW cov: 12100 ft: 13386 corp: 5/128b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:53.918 [2024-07-15 16:23:33.384947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.384977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.918 [2024-07-15 16:23:33.385026] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.385041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.918 [2024-07-15 16:23:33.385071] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.385086] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.918 [2024-07-15 16:23:33.385116] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.385130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.918 #11 NEW cov: 12100 ft: 13504 corp: 6/163b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:06:53.918 [2024-07-15 16:23:33.465129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.465159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.918 [2024-07-15 16:23:33.465208] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.465223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.918 [2024-07-15 16:23:33.465253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.465268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.918 [2024-07-15 16:23:33.465297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.918 [2024-07-15 16:23:33.465312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.918 #17 NEW cov: 12100 ft: 13656 corp: 7/198b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:06:54.177 [2024-07-15 16:23:33.515105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.177 [2024-07-15 16:23:33.515137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.177 #21 NEW cov: 12100 ft: 14079 corp: 8/205b lim: 35 exec/s: 0 rss: 72Mb L: 7/35 MS: 4 CrossOver-CopyPart-PersAutoDict-InsertByte- DE: "\377\377\377\004"- 00:06:54.177 NEW_FUNC[1/1]: 0x4b6ee0 in feat_number_of_queues /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:318 00:06:54.177 #22 NEW cov: 12132 ft: 14143 corp: 9/212b lim: 35 exec/s: 0 rss: 72Mb L: 7/35 MS: 1 ChangeBinInt- 00:06:54.177 [2024-07-15 16:23:33.675581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.177 [2024-07-15 16:23:33.675611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.177 [2024-07-15 16:23:33.675659] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.177 [2024-07-15 16:23:33.675674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.177 
[2024-07-15 16:23:33.675704] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.177 [2024-07-15 16:23:33.675719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.177 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:54.177 #23 NEW cov: 12149 ft: 14178 corp: 10/238b lim: 35 exec/s: 0 rss: 72Mb L: 26/35 MS: 1 PersAutoDict- DE: "\377\377\377\004"- 00:06:54.177 [2024-07-15 16:23:33.735857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.177 [2024-07-15 16:23:33.735886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.177 [2024-07-15 16:23:33.735934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.177 [2024-07-15 16:23:33.735950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.177 [2024-07-15 16:23:33.735980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.177 [2024-07-15 16:23:33.735995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.177 [2024-07-15 16:23:33.736025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.177 [2024-07-15 16:23:33.736040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.177 #24 NEW cov: 12149 ft: 14223 corp: 11/273b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:06:54.436 [2024-07-15 16:23:33.785841] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.436 [2024-07-15 16:23:33.785872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.436 #25 NEW cov: 12149 ft: 14317 corp: 12/289b lim: 35 exec/s: 25 rss: 72Mb L: 16/35 MS: 1 InsertRepeatedBytes- 00:06:54.436 [2024-07-15 16:23:33.846059] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.436 [2024-07-15 16:23:33.846089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.436 #26 NEW cov: 12149 ft: 14436 corp: 13/306b lim: 35 exec/s: 26 rss: 72Mb L: 17/35 MS: 1 InsertByte- 00:06:54.436 [2024-07-15 16:23:33.926967] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 12 00:06:54.436 [2024-07-15 16:23:33.927356] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.436 [2024-07-15 16:23:33.927389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.436 [2024-07-15 16:23:33.927456] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.436 [2024-07-15 16:23:33.927473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.436 [2024-07-15 16:23:33.927533] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:7 cdw10:00000104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.436 [2024-07-15 16:23:33.927549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.436 [2024-07-15 16:23:33.927611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.436 [2024-07-15 16:23:33.927627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.436 NEW_FUNC[1/3]: 0x4b4bb0 in feat_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:295 00:06:54.436 NEW_FUNC[2/3]: 0x11dea50 in nvmf_ctrlr_get_features_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1686 00:06:54.436 #27 NEW cov: 12209 ft: 14655 corp: 14/341b lim: 35 exec/s: 27 rss: 72Mb L: 35/35 MS: 1 PersAutoDict- DE: "\377\377\377\004"- 00:06:54.436 [2024-07-15 16:23:33.966942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.436 [2024-07-15 16:23:33.966967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.436 [2024-07-15 16:23:33.967026] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.436 [2024-07-15 16:23:33.967040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.436 #28 NEW cov: 12209 ft: 14819 corp: 15/355b lim: 35 exec/s: 28 rss: 72Mb L: 14/35 MS: 1 CrossOver- 00:06:54.436 [2024-07-15 16:23:34.006960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.436 [2024-07-15 16:23:34.006984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.436 #29 NEW cov: 12209 ft: 14911 corp: 16/363b lim: 35 exec/s: 29 rss: 72Mb L: 8/35 MS: 1 CopyPart- 00:06:54.695 [2024-07-15 16:23:34.047575] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.047600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.047659] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000019e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.047672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.047728] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 
cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.047740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.047800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.047813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.695 #30 NEW cov: 12209 ft: 14936 corp: 17/398b lim: 35 exec/s: 30 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:06:54.695 #31 NEW cov: 12209 ft: 14954 corp: 18/406b lim: 35 exec/s: 31 rss: 72Mb L: 8/35 MS: 1 InsertByte- 00:06:54.695 [2024-07-15 16:23:34.137860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.137885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.137942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.137955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.138010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.138023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.138079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.138093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.695 #32 NEW cov: 12209 ft: 14962 corp: 19/441b lim: 35 exec/s: 32 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:54.695 [2024-07-15 16:23:34.177534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000023f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.177558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.177620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.177633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.695 #35 NEW cov: 12209 ft: 14994 corp: 20/455b lim: 35 exec/s: 35 rss: 72Mb L: 14/35 MS: 3 EraseBytes-InsertByte-CMP- DE: "\001+\037\200>\207\223\252"- 00:06:54.695 [2024-07-15 16:23:34.218101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.218125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.218197] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.218211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.218268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.218281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.218341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.218355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.695 #36 NEW cov: 12209 ft: 15040 corp: 21/490b lim: 35 exec/s: 36 rss: 72Mb L: 35/35 MS: 1 ChangeBit- 00:06:54.695 [2024-07-15 16:23:34.268057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.268082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.268139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.268152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.268223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.268237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.695 [2024-07-15 16:23:34.268295] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.695 [2024-07-15 16:23:34.268308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.955 #37 NEW cov: 12209 ft: 15140 corp: 22/521b lim: 35 exec/s: 37 rss: 72Mb L: 31/35 MS: 1 InsertRepeatedBytes- 00:06:54.955 [2024-07-15 16:23:34.318044] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.318068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.318127] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.318140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.318195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.318208] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.955 #38 NEW cov: 12209 ft: 15157 corp: 23/547b lim: 35 exec/s: 38 rss: 72Mb L: 26/35 MS: 1 ChangeBinInt- 00:06:54.955 [2024-07-15 16:23:34.358476] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.358500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.358560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.358573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.358631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.358644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.358699] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.358712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.388484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.388507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.388568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.388581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.388637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.388650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.955 #40 NEW cov: 12209 ft: 15203 corp: 24/579b lim: 35 exec/s: 40 rss: 72Mb L: 32/35 MS: 2 CopyPart-EraseBytes- 00:06:54.955 #41 NEW cov: 12209 ft: 15224 corp: 25/588b lim: 35 exec/s: 41 rss: 72Mb L: 9/35 MS: 1 InsertByte- 00:06:54.955 [2024-07-15 16:23:34.478890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.478915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.478972] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.478986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.479044] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.479058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.479114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.479127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.955 #42 NEW cov: 12209 ft: 15257 corp: 26/623b lim: 35 exec/s: 42 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:54.955 [2024-07-15 16:23:34.528861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.528887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.528946] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.528959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.955 [2024-07-15 16:23:34.529032] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.955 [2024-07-15 16:23:34.529046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.214 #43 NEW cov: 12209 ft: 15295 corp: 27/656b lim: 35 exec/s: 43 rss: 72Mb L: 33/35 MS: 1 InsertByte- 00:06:55.214 [2024-07-15 16:23:34.579197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.579223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.579284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.579298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.579356] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.579372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.579432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.579452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.214 #44 NEW cov: 12209 ft: 15317 corp: 28/691b lim: 35 exec/s: 44 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:55.214 #45 NEW cov: 12209 ft: 15324 corp: 29/699b lim: 35 
exec/s: 45 rss: 73Mb L: 8/35 MS: 1 EraseBytes- 00:06:55.214 [2024-07-15 16:23:34.679054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.679079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.679154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.679168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.214 #46 NEW cov: 12216 ft: 15334 corp: 30/713b lim: 35 exec/s: 46 rss: 73Mb L: 14/35 MS: 1 ChangeByte- 00:06:55.214 [2024-07-15 16:23:34.729527] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.729552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.729627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.729640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.729700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.729714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.729770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.729783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.214 #47 NEW cov: 12216 ft: 15342 corp: 31/748b lim: 35 exec/s: 47 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:06:55.214 [2024-07-15 16:23:34.779494] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.779520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.779595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.779608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.779666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000041f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.779680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.214 [2024-07-15 16:23:34.779738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:55.214 [2024-07-15 16:23:34.779751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.214 #48 NEW cov: 12216 ft: 15348 corp: 32/782b lim: 35 exec/s: 48 rss: 73Mb L: 34/35 MS: 1 PersAutoDict- DE: "\001+\037\200>\207\223\252"- 00:06:55.474 [2024-07-15 16:23:34.819828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.474 [2024-07-15 16:23:34.819854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.474 [2024-07-15 16:23:34.819927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.474 [2024-07-15 16:23:34.819941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.474 [2024-07-15 16:23:34.819998] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.474 [2024-07-15 16:23:34.820010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.474 [2024-07-15 16:23:34.820068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000133 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.474 [2024-07-15 16:23:34.820082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.474 #49 NEW cov: 12216 ft: 15352 corp: 33/817b lim: 35 exec/s: 24 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:06:55.474 #49 DONE cov: 12216 ft: 15352 corp: 33/817b lim: 35 exec/s: 24 rss: 73Mb 00:06:55.474 ###### Recommended dictionary. ###### 00:06:55.474 "\377\377\377\004" # Uses: 4 00:06:55.474 "\001+\037\200>\207\223\252" # Uses: 1 00:06:55.474 ###### End of recommended dictionary. 
###### 00:06:55.474 Done 49 runs in 2 second(s) 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:55.474 16:23:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:55.474 [2024-07-15 16:23:35.005020] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:55.474 [2024-07-15 16:23:35.005092] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029334 ] 00:06:55.474 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.733 [2024-07-15 16:23:35.178710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.733 [2024-07-15 16:23:35.243730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.733 [2024-07-15 16:23:35.302451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.733 [2024-07-15 16:23:35.318745] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:55.992 INFO: Running with entropic power schedule (0xFF, 100). 00:06:55.992 INFO: Seed: 1955536609 00:06:55.992 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:55.992 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:55.992 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:55.992 INFO: A corpus is not provided, starting from an empty corpus 00:06:55.992 #2 INITED exec/s: 0 rss: 65Mb 00:06:55.992 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:55.992 This may also happen if the target rejected all inputs we tried so far 00:06:55.992 [2024-07-15 16:23:35.364085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.992 [2024-07-15 16:23:35.364116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.992 [2024-07-15 16:23:35.364152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.992 [2024-07-15 16:23:35.364167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.992 [2024-07-15 16:23:35.364218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.992 [2024-07-15 16:23:35.364232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.992 [2024-07-15 16:23:35.364284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.992 [2024-07-15 16:23:35.364298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.251 NEW_FUNC[1/696]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:56.251 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:56.251 #8 NEW cov: 11953 ft: 11954 corp: 2/93b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 1 InsertRepeatedBytes- 00:06:56.252 [2024-07-15 16:23:35.695771] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.695821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.252 [2024-07-15 16:23:35.695957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.695986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.252 [2024-07-15 16:23:35.696119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.696153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.252 [2024-07-15 16:23:35.696281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.696311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.252 #14 NEW cov: 12099 ft: 12766 corp: 3/185b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 1 CMP- DE: "\001\015"- 00:06:56.252 [2024-07-15 16:23:35.755896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.755931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.252 [2024-07-15 16:23:35.756068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.756093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.252 [2024-07-15 16:23:35.756211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.756237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.252 [2024-07-15 16:23:35.756359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.756381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.252 #15 NEW cov: 12105 ft: 12904 corp: 4/277b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 1 ChangeByte- 00:06:56.252 [2024-07-15 16:23:35.806047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.806083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.252 [2024-07-15 
16:23:35.806215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.806242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.252 [2024-07-15 16:23:35.806364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.806387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.252 [2024-07-15 16:23:35.806527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225206681 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.252 [2024-07-15 16:23:35.806554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.252 #16 NEW cov: 12190 ft: 13138 corp: 5/369b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 1 ChangeBit- 00:06:56.512 [2024-07-15 16:23:35.846132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.846167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.846255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.846285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.846408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.846435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.846560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.846582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.512 #17 NEW cov: 12190 ft: 13327 corp: 6/461b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 1 ChangeBit- 00:06:56.512 [2024-07-15 16:23:35.886282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.886316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.886418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.886446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.886568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.886592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.886715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.886735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.512 #18 NEW cov: 12190 ft: 13399 corp: 7/553b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 1 ChangeBinInt- 00:06:56.512 [2024-07-15 16:23:35.926374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.926407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.926508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.926534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.926650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.926671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.926798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.926822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.512 #19 NEW cov: 12190 ft: 13460 corp: 8/645b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 1 CopyPart- 00:06:56.512 [2024-07-15 16:23:35.976425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.976462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.976580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11067926597458303385 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.976603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.976725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.976749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:35.976858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:35.976877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.512 #20 NEW cov: 12190 ft: 13556 corp: 9/737b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 1 ChangeByte- 00:06:56.512 [2024-07-15 16:23:36.016599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:36.016634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:36.016762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:36.016784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:36.016905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:36.016927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:36.017044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:36.017069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.512 #21 NEW cov: 12190 ft: 13584 corp: 10/829b lim: 105 exec/s: 0 rss: 72Mb L: 92/92 MS: 1 ShuffleBytes- 00:06:56.512 [2024-07-15 16:23:36.066808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:36.066845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:36.066967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:36.066993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:36.067111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:36.067136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.512 [2024-07-15 16:23:36.067259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.512 [2024-07-15 16:23:36.067286] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.512 #22 NEW cov: 12190 ft: 13646 corp: 11/924b lim: 105 exec/s: 0 rss: 72Mb L: 95/95 MS: 1 InsertRepeatedBytes- 00:06:56.771 [2024-07-15 16:23:36.116937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.771 [2024-07-15 16:23:36.116972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.771 [2024-07-15 16:23:36.117070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.771 [2024-07-15 16:23:36.117092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.771 [2024-07-15 16:23:36.117210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.771 [2024-07-15 16:23:36.117232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.771 [2024-07-15 16:23:36.117358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.771 [2024-07-15 16:23:36.117383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.771 #23 NEW cov: 12190 ft: 13679 corp: 12/1016b lim: 105 exec/s: 0 rss: 72Mb L: 92/95 MS: 1 ChangeByte- 00:06:56.771 [2024-07-15 16:23:36.157065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.771 [2024-07-15 16:23:36.157101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.771 [2024-07-15 16:23:36.157232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.157258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.157376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.157400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.157525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.157550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.772 #24 NEW cov: 12190 ft: 13705 corp: 13/1108b lim: 105 exec/s: 0 rss: 72Mb L: 92/95 MS: 1 ShuffleBytes- 00:06:56.772 [2024-07-15 16:23:36.197178] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.197216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.197336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.197363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.197491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.197516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.197629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.197656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.772 #25 NEW cov: 12190 ft: 13725 corp: 14/1206b lim: 105 exec/s: 0 rss: 72Mb L: 98/98 MS: 1 CopyPart- 00:06:56.772 [2024-07-15 16:23:36.237255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.237289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.237371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.237393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.237518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.237536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.237667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.237689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.772 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:56.772 #26 NEW cov: 12213 ft: 13778 corp: 15/1310b lim: 105 exec/s: 0 rss: 72Mb L: 104/104 MS: 1 CopyPart- 00:06:56.772 [2024-07-15 16:23:36.277386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.277418] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.277520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.277542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.277660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.277682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.277808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.277832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.772 #27 NEW cov: 12213 ft: 13803 corp: 16/1402b lim: 105 exec/s: 0 rss: 72Mb L: 92/104 MS: 1 CopyPart- 00:06:56.772 [2024-07-15 16:23:36.327465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.327499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.327585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068039847155964313 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.327607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.327727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.327752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.772 [2024-07-15 16:23:36.327870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.772 [2024-07-15 16:23:36.327893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.772 #28 NEW cov: 12213 ft: 13822 corp: 17/1495b lim: 105 exec/s: 0 rss: 72Mb L: 93/104 MS: 1 InsertByte- 00:06:57.032 [2024-07-15 16:23:36.367699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:62311 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.367733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.367830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11067926597458303385 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:57.032 [2024-07-15 16:23:36.367851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.367972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.367997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.368115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.368140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.032 #29 NEW cov: 12213 ft: 13843 corp: 18/1587b lim: 105 exec/s: 29 rss: 72Mb L: 92/104 MS: 1 ChangeBinInt- 00:06:57.032 [2024-07-15 16:23:36.417817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.417849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.417949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.417971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.418097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.418124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.418250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744071991066623 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.418275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.032 #30 NEW cov: 12213 ft: 13917 corp: 19/1686b lim: 105 exec/s: 30 rss: 72Mb L: 99/104 MS: 1 InsertRepeatedBytes- 00:06:57.032 [2024-07-15 16:23:36.467962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.468002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.468125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.468146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.468266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 
lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.468290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.468416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11428334414415371934 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.468440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.032 #31 NEW cov: 12213 ft: 13949 corp: 20/1781b lim: 105 exec/s: 31 rss: 72Mb L: 95/104 MS: 1 CopyPart- 00:06:57.032 [2024-07-15 16:23:36.518100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.518134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.518233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.518254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.518378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.518401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.518533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.518559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.032 #32 NEW cov: 12213 ft: 13975 corp: 21/1885b lim: 105 exec/s: 32 rss: 73Mb L: 104/104 MS: 1 ShuffleBytes- 00:06:57.032 [2024-07-15 16:23:36.568264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.568299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.568420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068039847155964313 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.032 [2024-07-15 16:23:36.568448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.032 [2024-07-15 16:23:36.568569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.033 [2024-07-15 16:23:36.568593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.033 [2024-07-15 16:23:36.568709] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.033 [2024-07-15 16:23:36.568734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.033 #33 NEW cov: 12213 ft: 14038 corp: 22/1978b lim: 105 exec/s: 33 rss: 73Mb L: 93/104 MS: 1 ShuffleBytes- 00:06:57.033 [2024-07-15 16:23:36.618477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.033 [2024-07-15 16:23:36.618508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.033 [2024-07-15 16:23:36.618591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.033 [2024-07-15 16:23:36.618623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.033 [2024-07-15 16:23:36.618737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.033 [2024-07-15 16:23:36.618760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.033 [2024-07-15 16:23:36.618879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6600475613874198937 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.033 [2024-07-15 16:23:36.618903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.292 #34 NEW cov: 12213 ft: 14054 corp: 23/2071b lim: 105 exec/s: 34 rss: 73Mb L: 93/104 MS: 1 InsertByte- 00:06:57.292 [2024-07-15 16:23:36.668562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17538818346740979969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.668594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.668695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046442397014425 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.668725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.668845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.668870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.668991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.669017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:06:57.292 #35 NEW cov: 12213 ft: 14064 corp: 24/2157b lim: 105 exec/s: 35 rss: 73Mb L: 86/104 MS: 1 EraseBytes- 00:06:57.292 [2024-07-15 16:23:36.718716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.718752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.718872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.718896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.719020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.719043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.719164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.719185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.292 #36 NEW cov: 12213 ft: 14081 corp: 25/2261b lim: 105 exec/s: 36 rss: 73Mb L: 104/104 MS: 1 CrossOver- 00:06:57.292 [2024-07-15 16:23:36.768855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.768890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.769004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068039847155964313 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.769027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.769148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.769175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.769294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.769315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.292 #37 NEW cov: 12213 ft: 14094 corp: 26/2354b lim: 105 exec/s: 37 rss: 73Mb L: 93/104 MS: 1 CopyPart- 00:06:57.292 [2024-07-15 16:23:36.819014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 
[2024-07-15 16:23:36.819047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.819154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068039847155964313 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.819175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.819290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.819314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.819437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.819465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.292 #38 NEW cov: 12213 ft: 14133 corp: 27/2447b lim: 105 exec/s: 38 rss: 74Mb L: 93/104 MS: 1 PersAutoDict- DE: "\001\015"- 00:06:57.292 [2024-07-15 16:23:36.869143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.869180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.869311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.869336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.869472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.869495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.292 [2024-07-15 16:23:36.869618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18229723129993559292 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.292 [2024-07-15 16:23:36.869639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.552 #39 NEW cov: 12213 ft: 14156 corp: 28/2551b lim: 105 exec/s: 39 rss: 74Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:06:57.552 [2024-07-15 16:23:36.909316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.909353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.552 [2024-07-15 16:23:36.909480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 
lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.909504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.552 [2024-07-15 16:23:36.909627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.909651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.552 [2024-07-15 16:23:36.909770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.909793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.552 #40 NEW cov: 12213 ft: 14171 corp: 29/2649b lim: 105 exec/s: 40 rss: 74Mb L: 98/104 MS: 1 ChangeBit- 00:06:57.552 [2024-07-15 16:23:36.959311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.959344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.552 [2024-07-15 16:23:36.959439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.959465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.552 [2024-07-15 16:23:36.959582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.959605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.552 [2024-07-15 16:23:36.959753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.959777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.552 #41 NEW cov: 12213 ft: 14182 corp: 30/2741b lim: 105 exec/s: 41 rss: 74Mb L: 92/104 MS: 1 ChangeBinInt- 00:06:57.552 [2024-07-15 16:23:36.999460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:30106 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.999498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.552 [2024-07-15 16:23:36.999612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.999636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.552 [2024-07-15 16:23:36.999768] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.552 [2024-07-15 16:23:36.999790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.553 [2024-07-15 16:23:36.999911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:10520408727819491737 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:36.999934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.553 #42 NEW cov: 12213 ft: 14190 corp: 31/2843b lim: 105 exec/s: 42 rss: 74Mb L: 102/104 MS: 1 InsertRepeatedBytes- 00:06:57.553 [2024-07-15 16:23:37.049584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.049616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.553 [2024-07-15 16:23:37.049709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068045790384069017 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.049731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.553 [2024-07-15 16:23:37.049853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.049875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.553 [2024-07-15 16:23:37.049991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068023354481547673 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.050015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.553 #43 NEW cov: 12213 ft: 14254 corp: 32/2942b lim: 105 exec/s: 43 rss: 74Mb L: 99/104 MS: 1 CrossOver- 00:06:57.553 [2024-07-15 16:23:37.099452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.099481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.553 [2024-07-15 16:23:37.099602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.099620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.553 #44 NEW cov: 12213 ft: 14800 corp: 33/2999b lim: 105 exec/s: 44 rss: 74Mb L: 57/104 MS: 1 EraseBytes- 00:06:57.553 [2024-07-15 16:23:37.139874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.139906] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.553 [2024-07-15 16:23:37.140006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068039847155964313 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.140027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.553 [2024-07-15 16:23:37.140133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.140158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.553 [2024-07-15 16:23:37.140274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730860 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.553 [2024-07-15 16:23:37.140295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.813 #45 NEW cov: 12213 ft: 14833 corp: 34/3092b lim: 105 exec/s: 45 rss: 74Mb L: 93/104 MS: 1 CrossOver- 00:06:57.813 [2024-07-15 16:23:37.180009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.180041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.813 [2024-07-15 16:23:37.180124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.180149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.813 [2024-07-15 16:23:37.180265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.180288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.813 [2024-07-15 16:23:37.180407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744071991066623 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.180430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.813 #46 NEW cov: 12213 ft: 14879 corp: 35/3191b lim: 105 exec/s: 46 rss: 74Mb L: 99/104 MS: 1 ChangeByte- 00:06:57.813 [2024-07-15 16:23:37.219816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.219844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.813 [2024-07-15 16:23:37.219971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:57.813 [2024-07-15 16:23:37.219993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.813 #47 NEW cov: 12213 ft: 14908 corp: 36/3248b lim: 105 exec/s: 47 rss: 74Mb L: 57/104 MS: 1 ChangeByte- 00:06:57.813 [2024-07-15 16:23:37.269930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:3482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.269964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.813 [2024-07-15 16:23:37.270081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.270103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.813 #48 NEW cov: 12213 ft: 14913 corp: 37/3305b lim: 105 exec/s: 48 rss: 74Mb L: 57/104 MS: 1 PersAutoDict- DE: "\001\015"- 00:06:57.813 [2024-07-15 16:23:37.320427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.320460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.813 [2024-07-15 16:23:37.320551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068039847155964313 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.320570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.813 [2024-07-15 16:23:37.320693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.320720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.813 [2024-07-15 16:23:37.320839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11068046444225730969 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.813 [2024-07-15 16:23:37.320855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.813 #49 NEW cov: 12213 ft: 14925 corp: 38/3398b lim: 105 exec/s: 24 rss: 74Mb L: 93/104 MS: 1 ChangeBinInt- 00:06:57.813 #49 DONE cov: 12213 ft: 14925 corp: 38/3398b lim: 105 exec/s: 24 rss: 74Mb 00:06:57.813 ###### Recommended dictionary. ###### 00:06:57.813 "\001\015" # Uses: 2 00:06:57.813 ###### End of recommended dictionary. 
###### 00:06:57.813 Done 49 runs in 2 second(s) 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:58.073 16:23:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:06:58.073 [2024-07-15 16:23:37.523196] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:58.073 [2024-07-15 16:23:37.523267] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029647 ] 00:06:58.073 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.333 [2024-07-15 16:23:37.711299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.333 [2024-07-15 16:23:37.777475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.333 [2024-07-15 16:23:37.836311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.333 [2024-07-15 16:23:37.852589] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:06:58.333 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.333 INFO: Seed: 192554137 00:06:58.333 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:58.333 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:58.333 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:58.333 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.333 #2 INITED exec/s: 0 rss: 63Mb 00:06:58.333 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:58.333 This may also happen if the target rejected all inputs we tried so far 00:06:58.333 [2024-07-15 16:23:37.922019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.333 [2024-07-15 16:23:37.922058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.333 [2024-07-15 16:23:37.922175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.333 [2024-07-15 16:23:37.922196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.333 [2024-07-15 16:23:37.922319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.333 [2024-07-15 16:23:37.922345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.851 NEW_FUNC[1/697]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:58.851 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:58.851 #15 NEW cov: 11990 ft: 11991 corp: 2/77b lim: 120 exec/s: 0 rss: 70Mb L: 76/76 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:06:58.851 [2024-07-15 16:23:38.262904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.262952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.851 [2024-07-15 16:23:38.263072] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.263097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.851 [2024-07-15 16:23:38.263224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.263248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.851 #16 NEW cov: 12120 ft: 12617 corp: 3/153b lim: 120 exec/s: 0 rss: 70Mb L: 76/76 MS: 1 CopyPart- 00:06:58.851 [2024-07-15 16:23:38.322929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.322965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.851 [2024-07-15 16:23:38.323081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.323105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.851 [2024-07-15 16:23:38.323230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.323253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.851 #17 NEW cov: 12126 ft: 12783 corp: 4/230b lim: 120 exec/s: 0 rss: 70Mb L: 77/77 MS: 1 InsertByte- 00:06:58.851 [2024-07-15 16:23:38.362757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.362790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.851 [2024-07-15 16:23:38.362895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.362920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.851 [2024-07-15 16:23:38.363035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9483344532587381635 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.363060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.851 #18 NEW cov: 12211 ft: 13027 corp: 5/321b lim: 120 exec/s: 0 rss: 70Mb L: 91/91 MS: 1 InsertRepeatedBytes- 00:06:58.851 [2024-07-15 16:23:38.413072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.413107] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.851 [2024-07-15 16:23:38.413224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.413246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.851 [2024-07-15 16:23:38.413364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9483344528292414339 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-07-15 16:23:38.413389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.110 #19 NEW cov: 12211 ft: 13187 corp: 6/412b lim: 120 exec/s: 0 rss: 70Mb L: 91/91 MS: 1 ChangeBit- 00:06:59.110 [2024-07-15 16:23:38.473603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.110 [2024-07-15 16:23:38.473639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.110 [2024-07-15 16:23:38.473730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.110 [2024-07-15 16:23:38.473755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.110 [2024-07-15 16:23:38.473875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.110 [2024-07-15 16:23:38.473897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.110 #20 NEW cov: 12211 ft: 13405 corp: 7/489b lim: 120 exec/s: 0 rss: 70Mb L: 77/91 MS: 1 InsertByte- 00:06:59.110 [2024-07-15 16:23:38.533617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.533649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.111 [2024-07-15 16:23:38.533743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.533764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.111 [2024-07-15 16:23:38.533893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.533917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.111 #21 NEW cov: 12211 ft: 13502 corp: 8/566b lim: 120 exec/s: 0 rss: 71Mb L: 77/91 MS: 1 ChangeByte- 00:06:59.111 [2024-07-15 16:23:38.593841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.593873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.111 [2024-07-15 16:23:38.593971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.593994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.111 [2024-07-15 16:23:38.594123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.594145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.111 #22 NEW cov: 12211 ft: 13590 corp: 9/642b lim: 120 exec/s: 0 rss: 71Mb L: 76/91 MS: 1 ChangeBit- 00:06:59.111 [2024-07-15 16:23:38.644206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.644238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.111 [2024-07-15 16:23:38.644314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.644338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.111 [2024-07-15 16:23:38.644464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9483344532587381635 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.644490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.111 [2024-07-15 16:23:38.644609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:9476562642192276355 len:39580 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-07-15 16:23:38.644634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.111 #23 NEW cov: 12211 ft: 13992 corp: 10/760b lim: 120 exec/s: 0 rss: 71Mb L: 118/118 MS: 1 CopyPart- 00:06:59.371 [2024-07-15 16:23:38.704379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14178673873146266820 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.704411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.704497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.704522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.704643] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.704667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.704792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.704813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.371 #28 NEW cov: 12211 ft: 14117 corp: 11/856b lim: 120 exec/s: 0 rss: 71Mb L: 96/118 MS: 5 InsertByte-CrossOver-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:06:59.371 [2024-07-15 16:23:38.753954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.753992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.754116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.754139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.371 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:59.371 #29 NEW cov: 12234 ft: 14456 corp: 12/921b lim: 120 exec/s: 0 rss: 71Mb L: 65/118 MS: 1 EraseBytes- 00:06:59.371 [2024-07-15 16:23:38.814462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.814491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.814577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.814601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.814730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.814755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.371 #30 NEW cov: 12234 ft: 14497 corp: 13/997b lim: 120 exec/s: 0 rss: 71Mb L: 76/118 MS: 1 CopyPart- 00:06:59.371 [2024-07-15 16:23:38.864838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14178673873146266820 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.864871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.864977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.865002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.865126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.865152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.865270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.865294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.371 #31 NEW cov: 12234 ft: 14537 corp: 14/1094b lim: 120 exec/s: 31 rss: 71Mb L: 97/118 MS: 1 InsertByte- 00:06:59.371 [2024-07-15 16:23:38.914716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660435867 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.914751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.914853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.914875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.914998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.915020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.371 #32 NEW cov: 12234 ft: 14641 corp: 15/1171b lim: 120 exec/s: 32 rss: 71Mb L: 77/118 MS: 1 ChangeBit- 00:06:59.371 [2024-07-15 16:23:38.954425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.954459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.371 [2024-07-15 16:23:38.954566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.371 [2024-07-15 16:23:38.954582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.630 #33 NEW cov: 12234 ft: 14661 corp: 16/1236b lim: 120 exec/s: 33 rss: 71Mb L: 65/118 MS: 1 ChangeBit- 00:06:59.630 [2024-07-15 16:23:39.015357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14178673873146266820 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.630 [2024-07-15 16:23:39.015391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.630 [2024-07-15 16:23:39.015475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.630 [2024-07-15 16:23:39.015504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.630 [2024-07-15 16:23:39.015622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.630 [2024-07-15 16:23:39.015646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.630 [2024-07-15 16:23:39.015769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.630 [2024-07-15 16:23:39.015794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.630 #34 NEW cov: 12234 ft: 14691 corp: 17/1334b lim: 120 exec/s: 34 rss: 71Mb L: 98/118 MS: 1 InsertByte- 00:06:59.630 [2024-07-15 16:23:39.074920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.630 [2024-07-15 16:23:39.074952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.630 [2024-07-15 16:23:39.075073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.631 [2024-07-15 16:23:39.075098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.631 #35 NEW cov: 12234 ft: 14735 corp: 18/1399b lim: 120 exec/s: 35 rss: 71Mb L: 65/118 MS: 1 ChangeByte- 00:06:59.631 [2024-07-15 16:23:39.115046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14178673873146266820 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.631 [2024-07-15 16:23:39.115080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.631 [2024-07-15 16:23:39.115208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.631 [2024-07-15 16:23:39.115232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.631 #36 NEW cov: 12234 ft: 14756 corp: 19/1458b lim: 120 exec/s: 36 rss: 72Mb L: 59/118 MS: 1 CrossOver- 00:06:59.631 [2024-07-15 16:23:39.165524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.631 [2024-07-15 16:23:39.165558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.631 [2024-07-15 16:23:39.165649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.631 [2024-07-15 16:23:39.165672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.631 [2024-07-15 16:23:39.165797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9483344532587381635 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.631 [2024-07-15 16:23:39.165820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.631 #37 NEW cov: 12234 ft: 14768 corp: 20/1549b lim: 120 exec/s: 37 rss: 72Mb L: 91/118 MS: 1 CrossOver- 00:06:59.631 [2024-07-15 16:23:39.215696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.631 [2024-07-15 16:23:39.215727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.631 [2024-07-15 16:23:39.215849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.631 [2024-07-15 16:23:39.215872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.631 [2024-07-15 16:23:39.215990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884170 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.631 [2024-07-15 16:23:39.216012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.891 #38 NEW cov: 12234 ft: 14795 corp: 21/1626b lim: 120 exec/s: 38 rss: 72Mb L: 77/118 MS: 1 CrossOver- 00:06:59.891 [2024-07-15 16:23:39.265763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.265797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.265890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.265913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.266036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9483344528292414339 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.266062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.891 #39 NEW cov: 12234 ft: 14815 corp: 22/1717b lim: 120 exec/s: 39 rss: 72Mb L: 91/118 MS: 1 ChangeByte- 00:06:59.891 [2024-07-15 16:23:39.315973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:34204 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.316007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:59.891 [2024-07-15 16:23:39.316108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.316132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.316254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.316279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.891 #40 NEW cov: 12234 ft: 14828 corp: 23/1795b lim: 120 exec/s: 40 rss: 72Mb L: 78/118 MS: 1 InsertByte- 00:06:59.891 [2024-07-15 16:23:39.376162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.376197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.376311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.376335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.376468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.376502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.891 #41 NEW cov: 12234 ft: 14836 corp: 24/1872b lim: 120 exec/s: 41 rss: 72Mb L: 77/118 MS: 1 InsertByte- 00:06:59.891 [2024-07-15 16:23:39.426252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.426287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.426407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.426429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.426546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9476589133146325891 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.426568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.891 #42 NEW cov: 12234 ft: 14854 corp: 25/1964b lim: 120 exec/s: 42 rss: 72Mb L: 92/118 MS: 1 InsertByte- 00:06:59.891 [2024-07-15 16:23:39.476675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14178673873146266820 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 
[2024-07-15 16:23:39.476708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.476834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:216346092634112 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.476854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.476973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.477001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.891 [2024-07-15 16:23:39.477124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.891 [2024-07-15 16:23:39.477150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.150 #43 NEW cov: 12234 ft: 14873 corp: 26/2060b lim: 120 exec/s: 43 rss: 72Mb L: 96/118 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:00.150 [2024-07-15 16:23:39.526276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.150 [2024-07-15 16:23:39.526303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.151 [2024-07-15 16:23:39.526435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.526464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.151 #44 NEW cov: 12234 ft: 14879 corp: 27/2125b lim: 120 exec/s: 44 rss: 72Mb L: 65/118 MS: 1 ShuffleBytes- 00:07:00.151 [2024-07-15 16:23:39.586501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.586533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.151 [2024-07-15 16:23:39.586668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.586697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.151 #45 NEW cov: 12234 ft: 14890 corp: 28/2195b lim: 120 exec/s: 45 rss: 72Mb L: 70/118 MS: 1 EraseBytes- 00:07:00.151 [2024-07-15 16:23:39.647234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.647267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.151 
[2024-07-15 16:23:39.647377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.647401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.151 [2024-07-15 16:23:39.647528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.647552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.151 [2024-07-15 16:23:39.647674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.647697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.151 #46 NEW cov: 12234 ft: 14892 corp: 29/2310b lim: 120 exec/s: 46 rss: 72Mb L: 115/118 MS: 1 CopyPart- 00:07:00.151 [2024-07-15 16:23:39.697281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14178673873146266820 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.697313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.151 [2024-07-15 16:23:39.697397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.697417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.151 [2024-07-15 16:23:39.697542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.697568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.151 [2024-07-15 16:23:39.697688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.697710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.151 #47 NEW cov: 12234 ft: 14905 corp: 30/2407b lim: 120 exec/s: 47 rss: 72Mb L: 97/118 MS: 1 ShuffleBytes- 00:07:00.151 [2024-07-15 16:23:39.736777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.736807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.151 [2024-07-15 16:23:39.736917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.736941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.151 [2024-07-15 16:23:39.737061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-07-15 16:23:39.737090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.411 #48 NEW cov: 12234 ft: 14935 corp: 31/2482b lim: 120 exec/s: 48 rss: 72Mb L: 75/118 MS: 1 EraseBytes- 00:07:00.411 [2024-07-15 16:23:39.777097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.777127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.777212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.777236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.777356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1374463283923456787 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.777380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.777516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1374613401620386579 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.777537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.411 #49 NEW cov: 12234 ft: 14965 corp: 32/2596b lim: 120 exec/s: 49 rss: 72Mb L: 114/118 MS: 1 InsertRepeatedBytes- 00:07:00.411 [2024-07-15 16:23:39.827516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212645437685013403 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.827558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.827662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.827688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.827811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11212726789901884201 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.827833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.411 #50 NEW cov: 12234 ft: 14966 corp: 33/2676b lim: 120 exec/s: 50 rss: 72Mb L: 80/118 MS: 1 CrossOver- 00:07:00.411 [2024-07-15 16:23:39.867527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14178673873146266820 len:50373 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.867558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.867685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.867708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.867824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.867849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.867972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14178673034449437892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.867996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.411 #51 NEW cov: 12234 ft: 14983 corp: 34/2778b lim: 120 exec/s: 51 rss: 72Mb L: 102/118 MS: 1 InsertRepeatedBytes- 00:07:00.411 [2024-07-15 16:23:39.918054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11212726788660566939 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.918087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.918186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.918210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.918329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9483344532587381635 len:39836 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.918352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.411 [2024-07-15 16:23:39.918488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:9476562642192276355 len:39580 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.411 [2024-07-15 16:23:39.918508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.411 #52 NEW cov: 12234 ft: 15005 corp: 35/2896b lim: 120 exec/s: 26 rss: 72Mb L: 118/118 MS: 1 ChangeBit- 00:07:00.411 #52 DONE cov: 12234 ft: 15005 corp: 35/2896b lim: 120 exec/s: 26 rss: 72Mb 00:07:00.411 ###### Recommended dictionary. ###### 00:07:00.411 "\000\000\000\000\000\000\000\000" # Uses: 0 00:07:00.411 ###### End of recommended dictionary. 
###### 00:07:00.411 Done 52 runs in 2 second(s) 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:00.671 16:23:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:00.671 [2024-07-15 16:23:40.114225] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:07:00.671 [2024-07-15 16:23:40.114306] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030195 ] 00:07:00.671 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.930 [2024-07-15 16:23:40.302512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.930 [2024-07-15 16:23:40.369883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.930 [2024-07-15 16:23:40.428946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.930 [2024-07-15 16:23:40.445270] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:00.930 INFO: Running with entropic power schedule (0xFF, 100). 00:07:00.930 INFO: Seed: 2785556526 00:07:00.930 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:00.930 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:00.930 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:00.930 INFO: A corpus is not provided, starting from an empty corpus 00:07:00.930 #2 INITED exec/s: 0 rss: 65Mb 00:07:00.930 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:00.930 This may also happen if the target rejected all inputs we tried so far 00:07:00.930 [2024-07-15 16:23:40.514998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:00.930 [2024-07-15 16:23:40.515038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.930 [2024-07-15 16:23:40.515150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:00.930 [2024-07-15 16:23:40.515168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.930 [2024-07-15 16:23:40.515287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:00.930 [2024-07-15 16:23:40.515306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.930 [2024-07-15 16:23:40.515422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:00.930 [2024-07-15 16:23:40.515447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.449 NEW_FUNC[1/695]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:01.449 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:01.449 #3 NEW cov: 11933 ft: 11934 corp: 2/88b lim: 100 exec/s: 0 rss: 71Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:07:01.449 [2024-07-15 16:23:40.865984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.449 [2024-07-15 16:23:40.866022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:01.449 [2024-07-15 16:23:40.866153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.449 [2024-07-15 16:23:40.866175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.449 [2024-07-15 16:23:40.866291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.449 [2024-07-15 16:23:40.866316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.449 [2024-07-15 16:23:40.866439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.449 [2024-07-15 16:23:40.866465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.449 #4 NEW cov: 12063 ft: 12586 corp: 3/175b lim: 100 exec/s: 0 rss: 71Mb L: 87/87 MS: 1 ChangeByte- 00:07:01.449 [2024-07-15 16:23:40.935838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.449 [2024-07-15 16:23:40.935872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.449 [2024-07-15 16:23:40.935988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.449 [2024-07-15 16:23:40.936013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.449 [2024-07-15 16:23:40.936131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.449 [2024-07-15 16:23:40.936152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.449 #9 NEW cov: 12069 ft: 13069 corp: 4/239b lim: 100 exec/s: 0 rss: 71Mb L: 64/87 MS: 5 ChangeBinInt-InsertByte-EraseBytes-ChangeBit-CrossOver- 00:07:01.449 [2024-07-15 16:23:40.986150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.449 [2024-07-15 16:23:40.986183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.449 [2024-07-15 16:23:40.986270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.449 [2024-07-15 16:23:40.986293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.449 [2024-07-15 16:23:40.986404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.449 [2024-07-15 16:23:40.986425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.449 [2024-07-15 16:23:40.986529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.449 [2024-07-15 16:23:40.986551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.449 #10 NEW cov: 12154 ft: 13317 corp: 5/335b lim: 100 exec/s: 0 rss: 71Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:07:01.449 [2024-07-15 
16:23:41.036117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.449 [2024-07-15 16:23:41.036151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.449 [2024-07-15 16:23:41.036262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.449 [2024-07-15 16:23:41.036284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.449 [2024-07-15 16:23:41.036407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.449 [2024-07-15 16:23:41.036423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.708 #16 NEW cov: 12154 ft: 13381 corp: 6/400b lim: 100 exec/s: 0 rss: 72Mb L: 65/96 MS: 1 InsertByte- 00:07:01.708 [2024-07-15 16:23:41.096308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.708 [2024-07-15 16:23:41.096340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.708 [2024-07-15 16:23:41.096460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.708 [2024-07-15 16:23:41.096483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.708 [2024-07-15 16:23:41.096595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.708 [2024-07-15 16:23:41.096621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.708 #17 NEW cov: 12154 ft: 13460 corp: 7/465b lim: 100 exec/s: 0 rss: 72Mb L: 65/96 MS: 1 InsertByte- 00:07:01.708 [2024-07-15 16:23:41.146674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.708 [2024-07-15 16:23:41.146705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.708 [2024-07-15 16:23:41.146779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.708 [2024-07-15 16:23:41.146804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.708 [2024-07-15 16:23:41.146917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.708 [2024-07-15 16:23:41.146938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.708 [2024-07-15 16:23:41.147056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.708 [2024-07-15 16:23:41.147078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.708 #18 NEW cov: 12154 ft: 13526 corp: 8/552b lim: 100 exec/s: 0 rss: 72Mb L: 87/96 MS: 1 ChangeBit- 00:07:01.708 [2024-07-15 16:23:41.196617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.708 [2024-07-15 16:23:41.196654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.708 [2024-07-15 16:23:41.196760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.708 [2024-07-15 16:23:41.196781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.708 [2024-07-15 16:23:41.196898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.709 [2024-07-15 16:23:41.196924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.709 #19 NEW cov: 12154 ft: 13581 corp: 9/616b lim: 100 exec/s: 0 rss: 72Mb L: 64/96 MS: 1 ShuffleBytes- 00:07:01.709 [2024-07-15 16:23:41.246788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.709 [2024-07-15 16:23:41.246820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.709 [2024-07-15 16:23:41.246932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.709 [2024-07-15 16:23:41.246952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.709 [2024-07-15 16:23:41.247069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.709 [2024-07-15 16:23:41.247088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.709 #20 NEW cov: 12154 ft: 13627 corp: 10/680b lim: 100 exec/s: 0 rss: 72Mb L: 64/96 MS: 1 CopyPart- 00:07:01.709 [2024-07-15 16:23:41.296993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.709 [2024-07-15 16:23:41.297027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.709 [2024-07-15 16:23:41.297139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.709 [2024-07-15 16:23:41.297162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.709 [2024-07-15 16:23:41.297281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.709 [2024-07-15 16:23:41.297303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.968 #21 NEW cov: 12154 ft: 13679 corp: 11/745b lim: 100 exec/s: 0 rss: 72Mb L: 65/96 MS: 1 ChangeByte- 00:07:01.968 [2024-07-15 16:23:41.357329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.968 [2024-07-15 16:23:41.357360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.357448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.968 [2024-07-15 16:23:41.357482] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.357599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.968 [2024-07-15 16:23:41.357621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.357734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.968 [2024-07-15 16:23:41.357755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.968 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:01.968 #22 NEW cov: 12177 ft: 13835 corp: 12/832b lim: 100 exec/s: 0 rss: 72Mb L: 87/96 MS: 1 ChangeByte- 00:07:01.968 [2024-07-15 16:23:41.417538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.968 [2024-07-15 16:23:41.417572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.417664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.968 [2024-07-15 16:23:41.417688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.417810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.968 [2024-07-15 16:23:41.417833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.417951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.968 [2024-07-15 16:23:41.417974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.968 #23 NEW cov: 12177 ft: 13851 corp: 13/920b lim: 100 exec/s: 0 rss: 72Mb L: 88/96 MS: 1 InsertByte- 00:07:01.968 [2024-07-15 16:23:41.477566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.968 [2024-07-15 16:23:41.477600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.477718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.968 [2024-07-15 16:23:41.477741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.477858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.968 [2024-07-15 16:23:41.477890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.968 #24 NEW cov: 12177 ft: 13877 corp: 14/985b lim: 100 exec/s: 24 rss: 73Mb L: 65/96 MS: 1 InsertByte- 00:07:01.968 [2024-07-15 16:23:41.537910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 
cid:0 nsid:0 00:07:01.968 [2024-07-15 16:23:41.537945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.538072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.968 [2024-07-15 16:23:41.538091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.538209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.968 [2024-07-15 16:23:41.538233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.968 [2024-07-15 16:23:41.538362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.968 [2024-07-15 16:23:41.538382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.968 #25 NEW cov: 12177 ft: 13894 corp: 15/1072b lim: 100 exec/s: 25 rss: 73Mb L: 87/96 MS: 1 ChangeBit- 00:07:02.228 [2024-07-15 16:23:41.587689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.228 [2024-07-15 16:23:41.587724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.587840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.228 [2024-07-15 16:23:41.587862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.228 #26 NEW cov: 12177 ft: 14246 corp: 16/1131b lim: 100 exec/s: 26 rss: 73Mb L: 59/96 MS: 1 EraseBytes- 00:07:02.228 [2024-07-15 16:23:41.638207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.228 [2024-07-15 16:23:41.638239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.638328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.228 [2024-07-15 16:23:41.638352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.638465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.228 [2024-07-15 16:23:41.638488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.638614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.228 [2024-07-15 16:23:41.638637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.228 #27 NEW cov: 12177 ft: 14250 corp: 17/1226b lim: 100 exec/s: 27 rss: 73Mb L: 95/96 MS: 1 CMP- DE: "\377\377~\303L\025\3375"- 00:07:02.228 [2024-07-15 16:23:41.677518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.228 [2024-07-15 
16:23:41.677546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.677656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.228 [2024-07-15 16:23:41.677678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.228 #28 NEW cov: 12177 ft: 14306 corp: 18/1285b lim: 100 exec/s: 28 rss: 73Mb L: 59/96 MS: 1 ShuffleBytes- 00:07:02.228 [2024-07-15 16:23:41.728019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.228 [2024-07-15 16:23:41.728050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.728146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.228 [2024-07-15 16:23:41.728178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.728294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.228 [2024-07-15 16:23:41.728318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.728440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.228 [2024-07-15 16:23:41.728464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.228 #29 NEW cov: 12177 ft: 14320 corp: 19/1381b lim: 100 exec/s: 29 rss: 73Mb L: 96/96 MS: 1 ChangeBinInt- 00:07:02.228 [2024-07-15 16:23:41.778276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.228 [2024-07-15 16:23:41.778307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.778379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.228 [2024-07-15 16:23:41.778401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.778529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.228 [2024-07-15 16:23:41.778552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.228 [2024-07-15 16:23:41.778664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.228 [2024-07-15 16:23:41.778683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.228 #30 NEW cov: 12177 ft: 14335 corp: 20/1476b lim: 100 exec/s: 30 rss: 73Mb L: 95/96 MS: 1 CMP- DE: "\301\325\010\004\205\037+\000"- 00:07:02.488 [2024-07-15 16:23:41.838701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.488 [2024-07-15 16:23:41.838732] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:41.838825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.488 [2024-07-15 16:23:41.838846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:41.838955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.488 [2024-07-15 16:23:41.838976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:41.839093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.488 [2024-07-15 16:23:41.839111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.488 #31 NEW cov: 12177 ft: 14348 corp: 21/1572b lim: 100 exec/s: 31 rss: 73Mb L: 96/96 MS: 1 ChangeByte- 00:07:02.488 [2024-07-15 16:23:41.898912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.488 [2024-07-15 16:23:41.898943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:41.899061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.488 [2024-07-15 16:23:41.899080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:41.899202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.488 [2024-07-15 16:23:41.899224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:41.899343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.488 [2024-07-15 16:23:41.899368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.488 #32 NEW cov: 12177 ft: 14354 corp: 22/1659b lim: 100 exec/s: 32 rss: 73Mb L: 87/96 MS: 1 ChangeBit- 00:07:02.488 [2024-07-15 16:23:41.949076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.488 [2024-07-15 16:23:41.949107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:41.949203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.488 [2024-07-15 16:23:41.949224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:41.949338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.488 [2024-07-15 16:23:41.949359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:41.949488] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.488 [2024-07-15 16:23:41.949512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.488 #33 NEW cov: 12177 ft: 14363 corp: 23/1747b lim: 100 exec/s: 33 rss: 73Mb L: 88/96 MS: 1 InsertByte- 00:07:02.488 [2024-07-15 16:23:42.009033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.488 [2024-07-15 16:23:42.009063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:42.009170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.488 [2024-07-15 16:23:42.009203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:42.009323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.488 [2024-07-15 16:23:42.009345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.488 #34 NEW cov: 12177 ft: 14368 corp: 24/1808b lim: 100 exec/s: 34 rss: 73Mb L: 61/96 MS: 1 CrossOver- 00:07:02.488 [2024-07-15 16:23:42.069325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.488 [2024-07-15 16:23:42.069357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:42.069461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.488 [2024-07-15 16:23:42.069487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:42.069603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.488 [2024-07-15 16:23:42.069624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.488 [2024-07-15 16:23:42.069739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.488 [2024-07-15 16:23:42.069762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.747 #35 NEW cov: 12177 ft: 14457 corp: 25/1906b lim: 100 exec/s: 35 rss: 73Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:07:02.747 [2024-07-15 16:23:42.119322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.747 [2024-07-15 16:23:42.119353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.119446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.748 [2024-07-15 16:23:42.119468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.119585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) 
sqid:1 cid:2 nsid:0 00:07:02.748 [2024-07-15 16:23:42.119610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.748 #36 NEW cov: 12177 ft: 14539 corp: 26/1968b lim: 100 exec/s: 36 rss: 73Mb L: 62/98 MS: 1 EraseBytes- 00:07:02.748 [2024-07-15 16:23:42.169459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.748 [2024-07-15 16:23:42.169490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.169579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.748 [2024-07-15 16:23:42.169603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.169720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.748 [2024-07-15 16:23:42.169739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.748 #37 NEW cov: 12177 ft: 14652 corp: 27/2033b lim: 100 exec/s: 37 rss: 73Mb L: 65/98 MS: 1 InsertByte- 00:07:02.748 [2024-07-15 16:23:42.219868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.748 [2024-07-15 16:23:42.219900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.219973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.748 [2024-07-15 16:23:42.219994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.220109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.748 [2024-07-15 16:23:42.220129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.220248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.748 [2024-07-15 16:23:42.220270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.748 #38 NEW cov: 12177 ft: 14678 corp: 28/2130b lim: 100 exec/s: 38 rss: 73Mb L: 97/98 MS: 1 InsertRepeatedBytes- 00:07:02.748 [2024-07-15 16:23:42.269646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.748 [2024-07-15 16:23:42.269674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.269751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.748 [2024-07-15 16:23:42.269770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.269895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.748 [2024-07-15 16:23:42.269914] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.270035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.748 [2024-07-15 16:23:42.270062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.748 #39 NEW cov: 12177 ft: 14701 corp: 29/2226b lim: 100 exec/s: 39 rss: 73Mb L: 96/98 MS: 1 InsertByte- 00:07:02.748 [2024-07-15 16:23:42.319805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.748 [2024-07-15 16:23:42.319834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.748 [2024-07-15 16:23:42.319969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.748 [2024-07-15 16:23:42.319992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.007 #40 NEW cov: 12177 ft: 14711 corp: 30/2285b lim: 100 exec/s: 40 rss: 74Mb L: 59/98 MS: 1 CMP- DE: "\000\000\000\000\000\000\000H"- 00:07:03.007 [2024-07-15 16:23:42.380342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.007 [2024-07-15 16:23:42.380375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.007 [2024-07-15 16:23:42.380465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.007 [2024-07-15 16:23:42.380488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.007 [2024-07-15 16:23:42.380597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:03.008 [2024-07-15 16:23:42.380619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.008 [2024-07-15 16:23:42.380733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:03.008 [2024-07-15 16:23:42.380757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.008 #41 NEW cov: 12177 ft: 14725 corp: 31/2372b lim: 100 exec/s: 41 rss: 74Mb L: 87/98 MS: 1 ChangeBit- 00:07:03.008 [2024-07-15 16:23:42.419996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.008 [2024-07-15 16:23:42.420029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.008 [2024-07-15 16:23:42.420155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.008 [2024-07-15 16:23:42.420178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.008 #42 NEW cov: 12177 ft: 14738 corp: 32/2431b lim: 100 exec/s: 42 rss: 74Mb L: 59/98 MS: 1 ChangeByte- 00:07:03.008 [2024-07-15 16:23:42.470631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.008 [2024-07-15 16:23:42.470665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.008 [2024-07-15 16:23:42.470780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.008 [2024-07-15 16:23:42.470799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.008 [2024-07-15 16:23:42.470917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:03.008 [2024-07-15 16:23:42.470943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.008 [2024-07-15 16:23:42.471060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:03.008 [2024-07-15 16:23:42.471082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.008 #43 NEW cov: 12177 ft: 14742 corp: 33/2519b lim: 100 exec/s: 21 rss: 74Mb L: 88/98 MS: 1 ShuffleBytes- 00:07:03.008 #43 DONE cov: 12177 ft: 14742 corp: 33/2519b lim: 100 exec/s: 21 rss: 74Mb 00:07:03.008 ###### Recommended dictionary. ###### 00:07:03.008 "\377\377~\303L\025\3375" # Uses: 0 00:07:03.008 "\301\325\010\004\205\037+\000" # Uses: 0 00:07:03.008 "\000\000\000\000\000\000\000H" # Uses: 0 00:07:03.008 ###### End of recommended dictionary. ###### 00:07:03.008 Done 43 runs in 2 second(s) 00:07:03.267 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:03.267 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:03.267 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.267 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:03.267 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:03.268 16:23:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:03.268 [2024-07-15 16:23:42.675476] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:03.268 [2024-07-15 16:23:42.675566] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030686 ] 00:07:03.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.268 [2024-07-15 16:23:42.849772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.528 [2024-07-15 16:23:42.916864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.528 [2024-07-15 16:23:42.975564] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.528 [2024-07-15 16:23:42.991876] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:03.528 INFO: Running with entropic power schedule (0xFF, 100). 00:07:03.528 INFO: Seed: 1039591536 00:07:03.528 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:03.528 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:03.528 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:03.528 INFO: A corpus is not provided, starting from an empty corpus 00:07:03.528 #2 INITED exec/s: 0 rss: 63Mb 00:07:03.528 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:03.528 This may also happen if the target rejected all inputs we tried so far 00:07:03.528 [2024-07-15 16:23:43.036510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:03.528 [2024-07-15 16:23:43.036545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.528 [2024-07-15 16:23:43.036595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:03.528 [2024-07-15 16:23:43.036614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.528 [2024-07-15 16:23:43.036642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:03.528 [2024-07-15 16:23:43.036658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.788 NEW_FUNC[1/695]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:03.788 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:03.788 #13 NEW cov: 11911 ft: 11912 corp: 2/39b lim: 50 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:03.788 [2024-07-15 16:23:43.377286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:07:03.788 [2024-07-15 16:23:43.377329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.047 #15 NEW cov: 12041 ft: 12798 corp: 3/54b lim: 50 exec/s: 0 rss: 70Mb L: 15/38 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:04.047 [2024-07-15 16:23:43.437368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:04.047 [2024-07-15 16:23:43.437398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.047 [2024-07-15 16:23:43.437452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:04.047 [2024-07-15 16:23:43.437470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.047 [2024-07-15 16:23:43.437500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:04.047 [2024-07-15 16:23:43.437516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.047 #21 NEW cov: 12047 ft: 12953 corp: 4/92b lim: 50 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 ShuffleBytes- 00:07:04.047 [2024-07-15 16:23:43.517580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304544910118542 len:36495 00:07:04.047 [2024-07-15 16:23:43.517610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.047 [2024-07-15 16:23:43.517642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10272304543006887566 len:36495 00:07:04.047 [2024-07-15 16:23:43.517660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.047 [2024-07-15 16:23:43.517693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744071806291598 len:65536 00:07:04.047 [2024-07-15 16:23:43.517710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.047 #28 NEW cov: 12132 ft: 13225 corp: 5/123b lim: 50 exec/s: 0 rss: 70Mb L: 31/38 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:04.047 [2024-07-15 16:23:43.597788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:04.047 [2024-07-15 16:23:43.597817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.047 [2024-07-15 16:23:43.597862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16206450515542860000 len:57569 00:07:04.047 [2024-07-15 16:23:43.597880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.047 [2024-07-15 16:23:43.597909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:04.047 [2024-07-15 16:23:43.597925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.305 #34 NEW cov: 12132 ft: 13385 corp: 6/161b lim: 50 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 ChangeBinInt- 00:07:04.305 [2024-07-15 16:23:43.677999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:04.305 [2024-07-15 16:23:43.678029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.305 [2024-07-15 16:23:43.678075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:04.305 [2024-07-15 16:23:43.678093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.305 [2024-07-15 16:23:43.678122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:04.305 [2024-07-15 16:23:43.678138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.306 [2024-07-15 16:23:43.678165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:07:04.306 [2024-07-15 16:23:43.678181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.306 #35 NEW cov: 12132 ft: 13789 corp: 7/210b lim: 50 exec/s: 0 rss: 70Mb L: 49/49 MS: 1 CrossOver- 
00:07:04.306 [2024-07-15 16:23:43.738148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:07:04.306 [2024-07-15 16:23:43.738179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.306 [2024-07-15 16:23:43.738226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:04.306 [2024-07-15 16:23:43.738247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.306 [2024-07-15 16:23:43.738276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:04.306 [2024-07-15 16:23:43.738293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.306 #36 NEW cov: 12132 ft: 13912 corp: 8/247b lim: 50 exec/s: 0 rss: 70Mb L: 37/49 MS: 1 InsertRepeatedBytes- 00:07:04.306 [2024-07-15 16:23:43.788278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:07:04.306 [2024-07-15 16:23:43.788308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.306 [2024-07-15 16:23:43.788355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744070991642623 len:65536 00:07:04.306 [2024-07-15 16:23:43.788373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.306 [2024-07-15 16:23:43.788401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:04.306 [2024-07-15 16:23:43.788417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.306 #37 NEW cov: 12132 ft: 13954 corp: 9/284b lim: 50 exec/s: 0 rss: 71Mb L: 37/49 MS: 1 ChangeByte- 00:07:04.306 [2024-07-15 16:23:43.868499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:04.306 [2024-07-15 16:23:43.868530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.306 [2024-07-15 16:23:43.868576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:04.306 [2024-07-15 16:23:43.868594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.306 [2024-07-15 16:23:43.868623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:04.306 [2024-07-15 16:23:43.868639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.565 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:04.565 #38 NEW cov: 12149 ft: 
13992 corp: 10/319b lim: 50 exec/s: 0 rss: 71Mb L: 35/49 MS: 1 EraseBytes- 00:07:04.565 [2024-07-15 16:23:43.948680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18392137928227684351 len:65536 00:07:04.565 [2024-07-15 16:23:43.948710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.565 [2024-07-15 16:23:43.948756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:04.565 [2024-07-15 16:23:43.948774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.565 [2024-07-15 16:23:43.948802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:04.565 [2024-07-15 16:23:43.948818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.565 #39 NEW cov: 12149 ft: 14038 corp: 11/356b lim: 50 exec/s: 0 rss: 71Mb L: 37/49 MS: 1 ChangeByte- 00:07:04.565 [2024-07-15 16:23:43.998736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:04.565 [2024-07-15 16:23:43.998765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.565 #40 NEW cov: 12149 ft: 14185 corp: 12/374b lim: 50 exec/s: 40 rss: 71Mb L: 18/49 MS: 1 CrossOver- 00:07:04.565 [2024-07-15 16:23:44.058887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:783873591612596448 len:57569 00:07:04.565 [2024-07-15 16:23:44.058918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.565 #41 NEW cov: 12149 ft: 14197 corp: 13/392b lim: 50 exec/s: 41 rss: 71Mb L: 18/49 MS: 1 ShuffleBytes- 00:07:04.565 [2024-07-15 16:23:44.139089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:04.565 [2024-07-15 16:23:44.139119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.824 #42 NEW cov: 12149 ft: 14218 corp: 14/410b lim: 50 exec/s: 42 rss: 71Mb L: 18/49 MS: 1 ChangeBit- 00:07:04.824 [2024-07-15 16:23:44.189335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:04.824 [2024-07-15 16:23:44.189365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.824 [2024-07-15 16:23:44.189410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:04.824 [2024-07-15 16:23:44.189428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.824 [2024-07-15 16:23:44.189465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:04.824 [2024-07-15 16:23:44.189481] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.824 [2024-07-15 16:23:44.189509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:07:04.824 [2024-07-15 16:23:44.189524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.824 #43 NEW cov: 12149 ft: 14242 corp: 15/459b lim: 50 exec/s: 43 rss: 71Mb L: 49/49 MS: 1 ChangeBinInt- 00:07:04.824 [2024-07-15 16:23:44.239359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712258330624 len:57569 00:07:04.824 [2024-07-15 16:23:44.239389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.824 #44 NEW cov: 12149 ft: 14278 corp: 16/477b lim: 50 exec/s: 44 rss: 71Mb L: 18/49 MS: 1 ChangeBinInt- 00:07:04.824 [2024-07-15 16:23:44.289608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:04.824 [2024-07-15 16:23:44.289638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.824 [2024-07-15 16:23:44.289683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:04.824 [2024-07-15 16:23:44.289700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.824 [2024-07-15 16:23:44.289729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14033993531092164832 len:49859 00:07:04.824 [2024-07-15 16:23:44.289744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.824 [2024-07-15 16:23:44.289771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:07:04.824 [2024-07-15 16:23:44.289787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.824 #45 NEW cov: 12149 ft: 14343 corp: 17/518b lim: 50 exec/s: 45 rss: 71Mb L: 41/49 MS: 1 InsertRepeatedBytes- 00:07:04.824 [2024-07-15 16:23:44.369818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:783640688421036256 len:3342 00:07:04.824 [2024-07-15 16:23:44.369847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.824 [2024-07-15 16:23:44.369897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 00:07:04.824 [2024-07-15 16:23:44.369915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.824 [2024-07-15 16:23:44.369943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 00:07:04.824 [2024-07-15 16:23:44.369959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.824 [2024-07-15 16:23:44.369986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:940655150086556941 len:57569 00:07:04.824 [2024-07-15 16:23:44.370001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.084 #46 NEW cov: 12149 ft: 14364 corp: 18/566b lim: 50 exec/s: 46 rss: 71Mb L: 48/49 MS: 1 InsertRepeatedBytes- 00:07:05.084 [2024-07-15 16:23:44.450032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:05.084 [2024-07-15 16:23:44.450060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.084 [2024-07-15 16:23:44.450105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:05.084 [2024-07-15 16:23:44.450123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.084 [2024-07-15 16:23:44.450152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14033993531092164832 len:49859 00:07:05.084 [2024-07-15 16:23:44.450168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.084 [2024-07-15 16:23:44.450194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:07:05.084 [2024-07-15 16:23:44.450210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.084 #47 NEW cov: 12149 ft: 14384 corp: 19/611b lim: 50 exec/s: 47 rss: 71Mb L: 45/49 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:05.084 [2024-07-15 16:23:44.530227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18392137928227684351 len:65536 00:07:05.084 [2024-07-15 16:23:44.530256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.084 [2024-07-15 16:23:44.530288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551423 len:65536 00:07:05.084 [2024-07-15 16:23:44.530305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.084 [2024-07-15 16:23:44.530334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:05.084 [2024-07-15 16:23:44.530350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.084 #48 NEW cov: 12149 ft: 14397 corp: 20/648b lim: 50 exec/s: 48 rss: 71Mb L: 37/49 MS: 1 ChangeByte- 00:07:05.084 [2024-07-15 16:23:44.610403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:05.084 [2024-07-15 16:23:44.610431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.084 [2024-07-15 16:23:44.610487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:05.084 [2024-07-15 16:23:44.610508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.084 [2024-07-15 16:23:44.610537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198032829374688 len:57569 00:07:05.084 [2024-07-15 16:23:44.610553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.084 #49 NEW cov: 12149 ft: 14422 corp: 21/686b lim: 50 exec/s: 49 rss: 71Mb L: 38/49 MS: 1 ChangeByte- 00:07:05.084 [2024-07-15 16:23:44.660541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:05.084 [2024-07-15 16:23:44.660569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.084 [2024-07-15 16:23:44.660616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:05.084 [2024-07-15 16:23:44.660633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.084 [2024-07-15 16:23:44.660662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174537 len:57569 00:07:05.084 [2024-07-15 16:23:44.660678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.343 #50 NEW cov: 12149 ft: 14448 corp: 22/721b lim: 50 exec/s: 50 rss: 71Mb L: 35/49 MS: 1 ChangeByte- 00:07:05.343 [2024-07-15 16:23:44.710710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:12769 00:07:05.343 [2024-07-15 16:23:44.710739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.343 [2024-07-15 16:23:44.710784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:05.343 [2024-07-15 16:23:44.710801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.343 [2024-07-15 16:23:44.710830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:05.343 [2024-07-15 16:23:44.710846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.343 [2024-07-15 16:23:44.710873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:07:05.343 [2024-07-15 16:23:44.710889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.343 #51 NEW cov: 12149 ft: 14508 corp: 23/770b lim: 50 exec/s: 51 rss: 71Mb L: 49/49 MS: 1 ChangeBinInt- 00:07:05.343 [2024-07-15 
16:23:44.760810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:05.343 [2024-07-15 16:23:44.760838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.343 [2024-07-15 16:23:44.760884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:05.343 [2024-07-15 16:23:44.760901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.343 [2024-07-15 16:23:44.760930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:05.343 [2024-07-15 16:23:44.760946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.343 #52 NEW cov: 12149 ft: 14525 corp: 24/808b lim: 50 exec/s: 52 rss: 71Mb L: 38/49 MS: 1 ShuffleBytes- 00:07:05.343 [2024-07-15 16:23:44.810980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18392137928227684351 len:65536 00:07:05.343 [2024-07-15 16:23:44.811009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.343 [2024-07-15 16:23:44.811053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:05.343 [2024-07-15 16:23:44.811071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.343 [2024-07-15 16:23:44.811099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:1 00:07:05.343 [2024-07-15 16:23:44.811115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.343 [2024-07-15 16:23:44.811142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1095216660480 len:65536 00:07:05.343 [2024-07-15 16:23:44.811158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.343 #53 NEW cov: 12149 ft: 14539 corp: 25/854b lim: 50 exec/s: 53 rss: 71Mb L: 46/49 MS: 1 InsertRepeatedBytes- 00:07:05.343 [2024-07-15 16:23:44.861105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18392137928227684351 len:65536 00:07:05.343 [2024-07-15 16:23:44.861135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.344 [2024-07-15 16:23:44.861181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551423 len:65536 00:07:05.344 [2024-07-15 16:23:44.861198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.344 [2024-07-15 16:23:44.861226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744072568700927 len:65536 00:07:05.344 
[2024-07-15 16:23:44.861243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.344 #54 NEW cov: 12156 ft: 14555 corp: 26/892b lim: 50 exec/s: 54 rss: 71Mb L: 38/49 MS: 1 InsertByte- 00:07:05.602 [2024-07-15 16:23:44.941411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:746767466471940320 len:3342 00:07:05.602 [2024-07-15 16:23:44.941448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.602 [2024-07-15 16:23:44.941497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 00:07:05.602 [2024-07-15 16:23:44.941525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.602 [2024-07-15 16:23:44.941555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 00:07:05.602 [2024-07-15 16:23:44.941572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.602 [2024-07-15 16:23:44.941599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:940655150086556941 len:57569 00:07:05.602 [2024-07-15 16:23:44.941614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.602 #55 NEW cov: 12156 ft: 14614 corp: 27/940b lim: 50 exec/s: 55 rss: 72Mb L: 48/49 MS: 1 ChangeByte- 00:07:05.602 [2024-07-15 16:23:45.021613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198712138850528 len:57569 00:07:05.602 [2024-07-15 16:23:45.021647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.602 [2024-07-15 16:23:45.021679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:05.602 [2024-07-15 16:23:45.021696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.602 [2024-07-15 16:23:45.021724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2242545361218182943 len:8161 00:07:05.602 [2024-07-15 16:23:45.021740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.602 [2024-07-15 16:23:45.021767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:07:05.602 [2024-07-15 16:23:45.021782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.602 #56 NEW cov: 12156 ft: 14625 corp: 28/989b lim: 50 exec/s: 28 rss: 72Mb L: 49/49 MS: 1 ChangeBinInt- 00:07:05.602 #56 DONE cov: 12156 ft: 14625 corp: 28/989b lim: 50 exec/s: 28 rss: 72Mb 00:07:05.602 ###### Recommended dictionary. ###### 00:07:05.602 "\377\377\377\377" # Uses: 0 00:07:05.602 ###### End of recommended dictionary. 
###### 00:07:05.602 Done 56 runs in 2 second(s) 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:05.602 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:05.862 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:05.862 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:05.862 16:23:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:05.862 [2024-07-15 16:23:45.224767] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:07:05.862 [2024-07-15 16:23:45.224836] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031019 ] 00:07:05.862 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.862 [2024-07-15 16:23:45.398467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.121 [2024-07-15 16:23:45.464435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.121 [2024-07-15 16:23:45.523295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.121 [2024-07-15 16:23:45.539601] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:06.121 INFO: Running with entropic power schedule (0xFF, 100). 00:07:06.121 INFO: Seed: 3586586246 00:07:06.121 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:06.121 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:06.121 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:06.121 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.121 #2 INITED exec/s: 0 rss: 63Mb 00:07:06.121 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:06.121 This may also happen if the target rejected all inputs we tried so far 00:07:06.121 [2024-07-15 16:23:45.595110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.121 [2024-07-15 16:23:45.595144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.121 [2024-07-15 16:23:45.595201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.121 [2024-07-15 16:23:45.595218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.121 [2024-07-15 16:23:45.595272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.121 [2024-07-15 16:23:45.595289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.121 [2024-07-15 16:23:45.595346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.121 [2024-07-15 16:23:45.595364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.381 NEW_FUNC[1/697]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:06.381 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.381 #18 NEW cov: 11966 ft: 11961 corp: 2/77b lim: 90 exec/s: 0 rss: 70Mb L: 76/76 MS: 1 InsertRepeatedBytes- 00:07:06.381 [2024-07-15 16:23:45.926207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.381 [2024-07-15 16:23:45.926268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.381 [2024-07-15 16:23:45.926356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.381 [2024-07-15 16:23:45.926387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.381 [2024-07-15 16:23:45.926475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.381 [2024-07-15 16:23:45.926505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.381 [2024-07-15 16:23:45.926586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.381 [2024-07-15 16:23:45.926616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.381 #19 NEW cov: 12099 ft: 12642 corp: 3/154b lim: 90 exec/s: 0 rss: 71Mb L: 77/77 MS: 1 InsertByte- 00:07:06.640 [2024-07-15 16:23:45.986031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.641 [2024-07-15 16:23:45.986061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:45.986100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.641 [2024-07-15 16:23:45.986114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:45.986166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.641 [2024-07-15 16:23:45.986182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:45.986233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.641 [2024-07-15 16:23:45.986248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.641 #20 NEW cov: 12105 ft: 12919 corp: 4/231b lim: 90 exec/s: 0 rss: 71Mb L: 77/77 MS: 1 ChangeBinInt- 00:07:06.641 [2024-07-15 16:23:46.036168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.641 [2024-07-15 16:23:46.036197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.036235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.641 [2024-07-15 16:23:46.036251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.036306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.641 [2024-07-15 16:23:46.036322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.036375] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.641 [2024-07-15 16:23:46.036389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.641 #21 NEW cov: 12190 ft: 13207 corp: 5/307b lim: 90 exec/s: 0 rss: 71Mb L: 76/77 MS: 1 ChangeBinInt- 00:07:06.641 [2024-07-15 16:23:46.076452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.641 [2024-07-15 16:23:46.076480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.076527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.641 [2024-07-15 16:23:46.076542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.076596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.641 [2024-07-15 16:23:46.076612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.076666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.641 [2024-07-15 16:23:46.076680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.076735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:06.641 [2024-07-15 16:23:46.076750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.641 #22 NEW cov: 12190 ft: 13367 corp: 6/397b lim: 90 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:06.641 [2024-07-15 16:23:46.116407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.641 [2024-07-15 16:23:46.116433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.116486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.641 [2024-07-15 16:23:46.116501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.116554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.641 [2024-07-15 16:23:46.116569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.116623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.641 [2024-07-15 16:23:46.116636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.641 #23 NEW cov: 12190 ft: 13494 corp: 7/474b lim: 90 exec/s: 0 rss: 71Mb L: 77/90 MS: 1 ChangeBit- 00:07:06.641 [2024-07-15 16:23:46.156704] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.641 [2024-07-15 16:23:46.156731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.156781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.641 [2024-07-15 16:23:46.156796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.156847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.641 [2024-07-15 16:23:46.156863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.156915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.641 [2024-07-15 16:23:46.156930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.156985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:06.641 [2024-07-15 16:23:46.157001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.641 #24 NEW cov: 12190 ft: 13524 corp: 8/564b lim: 90 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 ChangeBit- 00:07:06.641 [2024-07-15 16:23:46.206667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.641 [2024-07-15 16:23:46.206695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.206737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.641 [2024-07-15 16:23:46.206753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.206804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.641 [2024-07-15 16:23:46.206836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.641 [2024-07-15 16:23:46.206888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.641 [2024-07-15 16:23:46.206903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.928 #25 NEW cov: 12190 ft: 13587 corp: 9/640b lim: 90 exec/s: 0 rss: 71Mb L: 76/90 MS: 1 ChangeBinInt- 00:07:06.928 [2024-07-15 16:23:46.256821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.928 [2024-07-15 16:23:46.256849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.256896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.928 [2024-07-15 16:23:46.256912] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.256964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.928 [2024-07-15 16:23:46.256980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.257035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.928 [2024-07-15 16:23:46.257050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.928 #26 NEW cov: 12190 ft: 13667 corp: 10/717b lim: 90 exec/s: 0 rss: 71Mb L: 77/90 MS: 1 ChangeByte- 00:07:06.928 [2024-07-15 16:23:46.296932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.928 [2024-07-15 16:23:46.296960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.297003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.928 [2024-07-15 16:23:46.297019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.297072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.928 [2024-07-15 16:23:46.297087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.297139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.928 [2024-07-15 16:23:46.297154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.928 #27 NEW cov: 12190 ft: 13745 corp: 11/793b lim: 90 exec/s: 0 rss: 71Mb L: 76/90 MS: 1 CopyPart- 00:07:06.928 [2024-07-15 16:23:46.337161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.928 [2024-07-15 16:23:46.337188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.337234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.928 [2024-07-15 16:23:46.337250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.337303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.928 [2024-07-15 16:23:46.337317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.337370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.928 [2024-07-15 16:23:46.337385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:07:06.928 [2024-07-15 16:23:46.337440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:06.928 [2024-07-15 16:23:46.337460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.928 #28 NEW cov: 12190 ft: 13789 corp: 12/883b lim: 90 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 CopyPart- 00:07:06.928 [2024-07-15 16:23:46.377129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.928 [2024-07-15 16:23:46.377156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.377204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.928 [2024-07-15 16:23:46.377220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.377275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.928 [2024-07-15 16:23:46.377290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.377344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.928 [2024-07-15 16:23:46.377359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.928 #29 NEW cov: 12190 ft: 13813 corp: 13/961b lim: 90 exec/s: 0 rss: 71Mb L: 78/90 MS: 1 InsertByte- 00:07:06.928 [2024-07-15 16:23:46.417232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.928 [2024-07-15 16:23:46.417259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.417305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.928 [2024-07-15 16:23:46.417321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.417375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.928 [2024-07-15 16:23:46.417390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.928 [2024-07-15 16:23:46.417448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.929 [2024-07-15 16:23:46.417464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.929 #30 NEW cov: 12190 ft: 13844 corp: 14/1037b lim: 90 exec/s: 0 rss: 72Mb L: 76/90 MS: 1 ShuffleBytes- 00:07:06.929 [2024-07-15 16:23:46.467397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.929 [2024-07-15 16:23:46.467424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:06.929 [2024-07-15 16:23:46.467474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.929 [2024-07-15 16:23:46.467490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.929 [2024-07-15 16:23:46.467558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.929 [2024-07-15 16:23:46.467574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.929 [2024-07-15 16:23:46.467627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.929 [2024-07-15 16:23:46.467642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.188 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:07.188 #31 NEW cov: 12213 ft: 13898 corp: 15/1114b lim: 90 exec/s: 0 rss: 72Mb L: 77/90 MS: 1 ChangeByte- 00:07:07.188 [2024-07-15 16:23:46.517713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.188 [2024-07-15 16:23:46.517744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.188 [2024-07-15 16:23:46.517800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.188 [2024-07-15 16:23:46.517817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.188 [2024-07-15 16:23:46.517870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.188 [2024-07-15 16:23:46.517885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.188 [2024-07-15 16:23:46.517936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.188 [2024-07-15 16:23:46.517950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.188 [2024-07-15 16:23:46.518005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.188 [2024-07-15 16:23:46.518019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.188 #32 NEW cov: 12213 ft: 13939 corp: 16/1204b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:07.188 [2024-07-15 16:23:46.557361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.188 [2024-07-15 16:23:46.557388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.188 [2024-07-15 16:23:46.557429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.188 [2024-07-15 16:23:46.557450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:07:07.188 #33 NEW cov: 12213 ft: 14402 corp: 17/1256b lim: 90 exec/s: 33 rss: 72Mb L: 52/90 MS: 1 EraseBytes- 00:07:07.188 [2024-07-15 16:23:46.607774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.189 [2024-07-15 16:23:46.607802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.607848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.189 [2024-07-15 16:23:46.607863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.607915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.189 [2024-07-15 16:23:46.607930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.607982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.189 [2024-07-15 16:23:46.607996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.189 #34 NEW cov: 12213 ft: 14426 corp: 18/1343b lim: 90 exec/s: 34 rss: 72Mb L: 87/90 MS: 1 InsertRepeatedBytes- 00:07:07.189 [2024-07-15 16:23:46.657940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.189 [2024-07-15 16:23:46.657966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.658013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.189 [2024-07-15 16:23:46.658029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.658085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.189 [2024-07-15 16:23:46.658101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.658152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.189 [2024-07-15 16:23:46.658168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.189 #35 NEW cov: 12213 ft: 14432 corp: 19/1420b lim: 90 exec/s: 35 rss: 72Mb L: 77/90 MS: 1 InsertByte- 00:07:07.189 [2024-07-15 16:23:46.698044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.189 [2024-07-15 16:23:46.698071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.698117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.189 [2024-07-15 16:23:46.698133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.698186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.189 [2024-07-15 16:23:46.698201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.698256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.189 [2024-07-15 16:23:46.698270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.189 #36 NEW cov: 12213 ft: 14468 corp: 20/1498b lim: 90 exec/s: 36 rss: 72Mb L: 78/90 MS: 1 ChangeByte- 00:07:07.189 [2024-07-15 16:23:46.738284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.189 [2024-07-15 16:23:46.738311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.738381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.189 [2024-07-15 16:23:46.738397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.738455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.189 [2024-07-15 16:23:46.738471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.738524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.189 [2024-07-15 16:23:46.738540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.738593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.189 [2024-07-15 16:23:46.738609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.189 #37 NEW cov: 12213 ft: 14483 corp: 21/1588b lim: 90 exec/s: 37 rss: 72Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:07.189 [2024-07-15 16:23:46.778447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.189 [2024-07-15 16:23:46.778474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.778531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.189 [2024-07-15 16:23:46.778546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.778602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.189 [2024-07-15 16:23:46.778618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.778671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.189 [2024-07-15 16:23:46.778686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.189 [2024-07-15 16:23:46.778742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.189 [2024-07-15 16:23:46.778757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.448 #38 NEW cov: 12213 ft: 14510 corp: 22/1678b lim: 90 exec/s: 38 rss: 72Mb L: 90/90 MS: 1 ChangeByte- 00:07:07.448 [2024-07-15 16:23:46.818085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.448 [2024-07-15 16:23:46.818112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.818156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.448 [2024-07-15 16:23:46.818171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.448 #39 NEW cov: 12213 ft: 14532 corp: 23/1718b lim: 90 exec/s: 39 rss: 72Mb L: 40/90 MS: 1 EraseBytes- 00:07:07.448 [2024-07-15 16:23:46.858500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.448 [2024-07-15 16:23:46.858528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.858569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.448 [2024-07-15 16:23:46.858583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.858636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.448 [2024-07-15 16:23:46.858651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.858705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.448 [2024-07-15 16:23:46.858720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.448 #40 NEW cov: 12213 ft: 14539 corp: 24/1807b lim: 90 exec/s: 40 rss: 72Mb L: 89/90 MS: 1 InsertRepeatedBytes- 00:07:07.448 [2024-07-15 16:23:46.898467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.448 [2024-07-15 16:23:46.898495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.898531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.448 [2024-07-15 16:23:46.898546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.898597] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.448 [2024-07-15 16:23:46.898612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.448 #41 NEW cov: 12213 ft: 14811 corp: 25/1869b lim: 90 exec/s: 41 rss: 72Mb L: 62/90 MS: 1 EraseBytes- 00:07:07.448 [2024-07-15 16:23:46.948903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.448 [2024-07-15 16:23:46.948934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.948982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.448 [2024-07-15 16:23:46.948997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.949051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.448 [2024-07-15 16:23:46.949064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.949115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.448 [2024-07-15 16:23:46.949131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.949185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.448 [2024-07-15 16:23:46.949200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.448 #42 NEW cov: 12213 ft: 14837 corp: 26/1959b lim: 90 exec/s: 42 rss: 72Mb L: 90/90 MS: 1 ChangeBit- 00:07:07.448 [2024-07-15 16:23:46.999049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.448 [2024-07-15 16:23:46.999076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.999130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.448 [2024-07-15 16:23:46.999145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.999212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.448 [2024-07-15 16:23:46.999228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.999280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.448 [2024-07-15 16:23:46.999295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.448 [2024-07-15 16:23:46.999347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.448 [2024-07-15 16:23:46.999363] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.448 #43 NEW cov: 12213 ft: 14852 corp: 27/2049b lim: 90 exec/s: 43 rss: 72Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:07.707 [2024-07-15 16:23:47.049027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.707 [2024-07-15 16:23:47.049055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.049118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.707 [2024-07-15 16:23:47.049134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.049187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.707 [2024-07-15 16:23:47.049200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.049254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.707 [2024-07-15 16:23:47.049271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.707 #44 NEW cov: 12213 ft: 14860 corp: 28/2126b lim: 90 exec/s: 44 rss: 73Mb L: 77/90 MS: 1 ChangeBit- 00:07:07.707 [2024-07-15 16:23:47.089359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.707 [2024-07-15 16:23:47.089387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.089445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.707 [2024-07-15 16:23:47.089460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.089511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.707 [2024-07-15 16:23:47.089527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.089579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.707 [2024-07-15 16:23:47.089593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.089646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.707 [2024-07-15 16:23:47.089662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.707 #45 NEW cov: 12213 ft: 14875 corp: 29/2216b lim: 90 exec/s: 45 rss: 73Mb L: 90/90 MS: 1 CMP- DE: "\377*\037\207\275\241\351\330"- 00:07:07.707 [2024-07-15 16:23:47.128956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.707 
[2024-07-15 16:23:47.128984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.129037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.707 [2024-07-15 16:23:47.129052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.707 #46 NEW cov: 12213 ft: 14888 corp: 30/2262b lim: 90 exec/s: 46 rss: 73Mb L: 46/90 MS: 1 CrossOver- 00:07:07.707 [2024-07-15 16:23:47.169355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.707 [2024-07-15 16:23:47.169383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.169429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.707 [2024-07-15 16:23:47.169449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.169504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.707 [2024-07-15 16:23:47.169519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.169573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.707 [2024-07-15 16:23:47.169587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.707 #47 NEW cov: 12213 ft: 14898 corp: 31/2339b lim: 90 exec/s: 47 rss: 73Mb L: 77/90 MS: 1 ChangeBinInt- 00:07:07.707 [2024-07-15 16:23:47.209494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.707 [2024-07-15 16:23:47.209522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.209562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.707 [2024-07-15 16:23:47.209577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.209628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.707 [2024-07-15 16:23:47.209643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.209696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.707 [2024-07-15 16:23:47.209712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.707 #50 NEW cov: 12213 ft: 14908 corp: 32/2427b lim: 90 exec/s: 50 rss: 73Mb L: 88/90 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes- 00:07:07.707 [2024-07-15 16:23:47.249782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:07:07.707 [2024-07-15 16:23:47.249810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.249861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.707 [2024-07-15 16:23:47.249877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.249942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.707 [2024-07-15 16:23:47.249958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.250012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.707 [2024-07-15 16:23:47.250026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.250080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.707 [2024-07-15 16:23:47.250094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.707 #51 NEW cov: 12213 ft: 14948 corp: 33/2517b lim: 90 exec/s: 51 rss: 73Mb L: 90/90 MS: 1 CrossOver- 00:07:07.707 [2024-07-15 16:23:47.299769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.707 [2024-07-15 16:23:47.299797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.299843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.707 [2024-07-15 16:23:47.299859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.299919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.707 [2024-07-15 16:23:47.299935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.707 [2024-07-15 16:23:47.299988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.707 [2024-07-15 16:23:47.300004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.966 #52 NEW cov: 12213 ft: 14964 corp: 34/2605b lim: 90 exec/s: 52 rss: 73Mb L: 88/90 MS: 1 CopyPart- 00:07:07.966 [2024-07-15 16:23:47.349915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.966 [2024-07-15 16:23:47.349943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.349983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.966 [2024-07-15 16:23:47.349998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.350050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.966 [2024-07-15 16:23:47.350065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.350119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.966 [2024-07-15 16:23:47.350133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.966 #53 NEW cov: 12213 ft: 14970 corp: 35/2682b lim: 90 exec/s: 53 rss: 73Mb L: 77/90 MS: 1 PersAutoDict- DE: "\377*\037\207\275\241\351\330"- 00:07:07.966 [2024-07-15 16:23:47.400013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.966 [2024-07-15 16:23:47.400040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.400086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.966 [2024-07-15 16:23:47.400101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.400155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.966 [2024-07-15 16:23:47.400171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.400223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.966 [2024-07-15 16:23:47.400236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.966 #54 NEW cov: 12213 ft: 14979 corp: 36/2759b lim: 90 exec/s: 54 rss: 73Mb L: 77/90 MS: 1 InsertByte- 00:07:07.966 [2024-07-15 16:23:47.440269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.966 [2024-07-15 16:23:47.440296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.440350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.966 [2024-07-15 16:23:47.440364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.440417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.966 [2024-07-15 16:23:47.440432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.440489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.966 [2024-07-15 16:23:47.440504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:07:07.966 [2024-07-15 16:23:47.440557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.966 [2024-07-15 16:23:47.440571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.966 #55 NEW cov: 12213 ft: 15001 corp: 37/2849b lim: 90 exec/s: 55 rss: 73Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:07.966 [2024-07-15 16:23:47.490275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.966 [2024-07-15 16:23:47.490304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.490346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.966 [2024-07-15 16:23:47.490361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.490415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.966 [2024-07-15 16:23:47.490428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.490487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.966 [2024-07-15 16:23:47.490502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.966 #56 NEW cov: 12213 ft: 15005 corp: 38/2937b lim: 90 exec/s: 56 rss: 73Mb L: 88/90 MS: 1 ShuffleBytes- 00:07:07.966 [2024-07-15 16:23:47.540590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.966 [2024-07-15 16:23:47.540617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.540665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.966 [2024-07-15 16:23:47.540679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.540732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.966 [2024-07-15 16:23:47.540763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.540817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.966 [2024-07-15 16:23:47.540832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.966 [2024-07-15 16:23:47.540886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.966 [2024-07-15 16:23:47.540902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.224 #57 NEW cov: 12213 ft: 15080 corp: 39/3027b lim: 90 exec/s: 57 rss: 73Mb L: 90/90 MS: 1 ShuffleBytes- 
00:07:08.224 [2024-07-15 16:23:47.580646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.224 [2024-07-15 16:23:47.580673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.224 [2024-07-15 16:23:47.580727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.224 [2024-07-15 16:23:47.580742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.224 [2024-07-15 16:23:47.580794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.224 [2024-07-15 16:23:47.580809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.224 [2024-07-15 16:23:47.580862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.224 [2024-07-15 16:23:47.580877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.224 [2024-07-15 16:23:47.580930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:08.224 [2024-07-15 16:23:47.580947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.224 #58 NEW cov: 12213 ft: 15086 corp: 40/3117b lim: 90 exec/s: 29 rss: 74Mb L: 90/90 MS: 1 CopyPart- 00:07:08.224 #58 DONE cov: 12213 ft: 15086 corp: 40/3117b lim: 90 exec/s: 29 rss: 74Mb 00:07:08.224 ###### Recommended dictionary. ###### 00:07:08.224 "\377*\037\207\275\241\351\330" # Uses: 1 00:07:08.224 ###### End of recommended dictionary. 
###### 00:07:08.224 Done 58 runs in 2 second(s) 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.224 16:23:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:08.224 [2024-07-15 16:23:47.784054] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:07:08.224 [2024-07-15 16:23:47.784123] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031546 ] 00:07:08.224 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.482 [2024-07-15 16:23:47.959839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.482 [2024-07-15 16:23:48.024715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.740 [2024-07-15 16:23:48.083466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.740 [2024-07-15 16:23:48.099761] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:08.740 INFO: Running with entropic power schedule (0xFF, 100). 00:07:08.740 INFO: Seed: 1852613731 00:07:08.740 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:08.741 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:08.741 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:08.741 INFO: A corpus is not provided, starting from an empty corpus 00:07:08.741 #2 INITED exec/s: 0 rss: 63Mb 00:07:08.741 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:08.741 This may also happen if the target rejected all inputs we tried so far 00:07:08.741 [2024-07-15 16:23:48.144837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:08.741 [2024-07-15 16:23:48.144868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.999 NEW_FUNC[1/696]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:08.999 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:08.999 #3 NEW cov: 11943 ft: 11940 corp: 2/18b lim: 50 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:07:08.999 [2024-07-15 16:23:48.465764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:08.999 [2024-07-15 16:23:48.465798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.999 [2024-07-15 16:23:48.465856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:08.999 [2024-07-15 16:23:48.465872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.999 NEW_FUNC[1/1]: 0xf46e20 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:07:08.999 #4 NEW cov: 12074 ft: 13076 corp: 3/41b lim: 50 exec/s: 0 rss: 71Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:07:08.999 [2024-07-15 16:23:48.526226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:08.999 [2024-07-15 16:23:48.526254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.999 
[2024-07-15 16:23:48.526316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:08.999 [2024-07-15 16:23:48.526333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.999 [2024-07-15 16:23:48.526390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:08.999 [2024-07-15 16:23:48.526406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.999 [2024-07-15 16:23:48.526463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:08.999 [2024-07-15 16:23:48.526479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.999 #8 NEW cov: 12080 ft: 13726 corp: 4/88b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 4 ShuffleBytes-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:07:08.999 [2024-07-15 16:23:48.566321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:08.999 [2024-07-15 16:23:48.566349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.999 [2024-07-15 16:23:48.566396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:08.999 [2024-07-15 16:23:48.566411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.999 [2024-07-15 16:23:48.566470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:08.999 [2024-07-15 16:23:48.566486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.999 [2024-07-15 16:23:48.566542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:08.999 [2024-07-15 16:23:48.566557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.257 #14 NEW cov: 12165 ft: 14016 corp: 5/135b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 ChangeByte- 00:07:09.257 [2024-07-15 16:23:48.615980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.257 [2024-07-15 16:23:48.616008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.257 #15 NEW cov: 12165 ft: 14262 corp: 6/149b lim: 50 exec/s: 0 rss: 71Mb L: 14/47 MS: 1 InsertRepeatedBytes- 00:07:09.257 [2024-07-15 16:23:48.656557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.257 [2024-07-15 16:23:48.656585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.257 [2024-07-15 16:23:48.656633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.257 [2024-07-15 16:23:48.656647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.257 [2024-07-15 16:23:48.656701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.257 [2024-07-15 16:23:48.656716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.257 [2024-07-15 16:23:48.656766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.257 [2024-07-15 16:23:48.656781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.257 #16 NEW cov: 12165 ft: 14297 corp: 7/196b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 CopyPart- 00:07:09.257 [2024-07-15 16:23:48.696177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.257 [2024-07-15 16:23:48.696205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.257 #17 NEW cov: 12165 ft: 14356 corp: 8/214b lim: 50 exec/s: 0 rss: 71Mb L: 18/47 MS: 1 CrossOver- 00:07:09.257 [2024-07-15 16:23:48.736765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.257 [2024-07-15 16:23:48.736792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.257 [2024-07-15 16:23:48.736838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.257 [2024-07-15 16:23:48.736854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.257 [2024-07-15 16:23:48.736905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.257 [2024-07-15 16:23:48.736936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.257 [2024-07-15 16:23:48.736990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.257 [2024-07-15 16:23:48.737006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.257 #18 NEW cov: 12165 ft: 14450 corp: 9/262b lim: 50 exec/s: 0 rss: 71Mb L: 48/48 MS: 1 CrossOver- 00:07:09.257 [2024-07-15 16:23:48.776457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.257 [2024-07-15 16:23:48.776485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.257 #19 NEW cov: 12165 ft: 14470 corp: 10/279b lim: 50 exec/s: 0 rss: 71Mb L: 17/48 MS: 1 ChangeByte- 00:07:09.257 [2024-07-15 16:23:48.816556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.257 [2024-07-15 16:23:48.816587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.515 #20 NEW cov: 12165 ft: 14606 corp: 11/298b lim: 50 exec/s: 0 rss: 71Mb L: 19/48 MS: 1 EraseBytes- 00:07:09.515 [2024-07-15 16:23:48.866655] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.515 [2024-07-15 16:23:48.866683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.515 #21 NEW cov: 12165 ft: 14634 corp: 12/317b lim: 50 exec/s: 0 rss: 71Mb L: 19/48 MS: 1 CopyPart- 00:07:09.515 [2024-07-15 16:23:48.916975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.515 [2024-07-15 16:23:48.917001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.515 [2024-07-15 16:23:48.917068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.515 [2024-07-15 16:23:48.917084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.515 #22 NEW cov: 12165 ft: 14657 corp: 13/340b lim: 50 exec/s: 0 rss: 71Mb L: 23/48 MS: 1 ChangeBit- 00:07:09.515 [2024-07-15 16:23:48.957069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.515 [2024-07-15 16:23:48.957096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.515 [2024-07-15 16:23:48.957148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.515 [2024-07-15 16:23:48.957162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.515 #23 NEW cov: 12165 ft: 14677 corp: 14/363b lim: 50 exec/s: 0 rss: 71Mb L: 23/48 MS: 1 ChangeByte- 00:07:09.515 [2024-07-15 16:23:48.997468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.515 [2024-07-15 16:23:48.997495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.515 [2024-07-15 16:23:48.997548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.515 [2024-07-15 16:23:48.997564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.515 [2024-07-15 16:23:48.997634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.515 [2024-07-15 16:23:48.997650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.515 [2024-07-15 16:23:48.997702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.515 [2024-07-15 16:23:48.997717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.516 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:09.516 #24 NEW cov: 12188 ft: 14694 corp: 15/412b lim: 50 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 CMP- DE: "\032?"- 00:07:09.516 [2024-07-15 16:23:49.047659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 
00:07:09.516 [2024-07-15 16:23:49.047686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.516 [2024-07-15 16:23:49.047732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.516 [2024-07-15 16:23:49.047748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.516 [2024-07-15 16:23:49.047801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.516 [2024-07-15 16:23:49.047819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.516 [2024-07-15 16:23:49.047872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.516 [2024-07-15 16:23:49.047887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.516 #25 NEW cov: 12188 ft: 14774 corp: 16/459b lim: 50 exec/s: 0 rss: 71Mb L: 47/49 MS: 1 PersAutoDict- DE: "\032?"- 00:07:09.516 [2024-07-15 16:23:49.087285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.516 [2024-07-15 16:23:49.087312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.773 #26 NEW cov: 12188 ft: 14813 corp: 17/474b lim: 50 exec/s: 0 rss: 72Mb L: 15/49 MS: 1 InsertByte- 00:07:09.773 [2024-07-15 16:23:49.137886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.773 [2024-07-15 16:23:49.137915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.137954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.773 [2024-07-15 16:23:49.137970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.138024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.773 [2024-07-15 16:23:49.138040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.138098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.773 [2024-07-15 16:23:49.138114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.773 #27 NEW cov: 12188 ft: 14839 corp: 18/521b lim: 50 exec/s: 27 rss: 72Mb L: 47/49 MS: 1 CopyPart- 00:07:09.773 [2024-07-15 16:23:49.188005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.773 [2024-07-15 16:23:49.188032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.188079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 
nsid:0 00:07:09.773 [2024-07-15 16:23:49.188094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.188149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.773 [2024-07-15 16:23:49.188164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.188217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.773 [2024-07-15 16:23:49.188232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.773 #28 NEW cov: 12188 ft: 14867 corp: 19/568b lim: 50 exec/s: 28 rss: 72Mb L: 47/49 MS: 1 ChangeBinInt- 00:07:09.773 [2024-07-15 16:23:49.227964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.773 [2024-07-15 16:23:49.227991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.228047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.773 [2024-07-15 16:23:49.228063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.228120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.773 [2024-07-15 16:23:49.228135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.773 #31 NEW cov: 12188 ft: 15141 corp: 20/599b lim: 50 exec/s: 31 rss: 72Mb L: 31/49 MS: 3 ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:07:09.773 [2024-07-15 16:23:49.267938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.773 [2024-07-15 16:23:49.267964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.268017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.773 [2024-07-15 16:23:49.268033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.773 #37 NEW cov: 12188 ft: 15156 corp: 21/626b lim: 50 exec/s: 37 rss: 72Mb L: 27/49 MS: 1 EraseBytes- 00:07:09.773 [2024-07-15 16:23:49.308007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.773 [2024-07-15 16:23:49.308034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.773 [2024-07-15 16:23:49.308090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.773 [2024-07-15 16:23:49.308106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.773 #38 NEW cov: 12188 ft: 15245 corp: 22/652b lim: 50 exec/s: 38 rss: 72Mb L: 26/49 MS: 1 
EraseBytes- 00:07:09.773 [2024-07-15 16:23:49.348006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.773 [2024-07-15 16:23:49.348032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.032 #39 NEW cov: 12188 ft: 15283 corp: 23/671b lim: 50 exec/s: 39 rss: 72Mb L: 19/49 MS: 1 ChangeByte- 00:07:10.032 [2024-07-15 16:23:49.388548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.032 [2024-07-15 16:23:49.388574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.032 [2024-07-15 16:23:49.388629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.032 [2024-07-15 16:23:49.388645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.032 [2024-07-15 16:23:49.388699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.032 [2024-07-15 16:23:49.388713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.032 [2024-07-15 16:23:49.388767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.032 [2024-07-15 16:23:49.388783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.032 #40 NEW cov: 12188 ft: 15300 corp: 24/718b lim: 50 exec/s: 40 rss: 72Mb L: 47/49 MS: 1 ChangeByte- 00:07:10.032 [2024-07-15 16:23:49.438249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.032 [2024-07-15 16:23:49.438276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.032 #41 NEW cov: 12188 ft: 15356 corp: 25/737b lim: 50 exec/s: 41 rss: 72Mb L: 19/49 MS: 1 ChangeBinInt- 00:07:10.032 [2024-07-15 16:23:49.488843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.032 [2024-07-15 16:23:49.488873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.032 [2024-07-15 16:23:49.488914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.032 [2024-07-15 16:23:49.488930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.032 [2024-07-15 16:23:49.488983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.032 [2024-07-15 16:23:49.488998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.032 [2024-07-15 16:23:49.489050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.032 [2024-07-15 16:23:49.489064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:07:10.032 #42 NEW cov: 12188 ft: 15366 corp: 26/786b lim: 50 exec/s: 42 rss: 72Mb L: 49/49 MS: 1 CrossOver- 00:07:10.032 [2024-07-15 16:23:49.528534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.032 [2024-07-15 16:23:49.528561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.032 #43 NEW cov: 12188 ft: 15382 corp: 27/805b lim: 50 exec/s: 43 rss: 72Mb L: 19/49 MS: 1 ChangeByte- 00:07:10.032 [2024-07-15 16:23:49.568794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.032 [2024-07-15 16:23:49.568820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.032 [2024-07-15 16:23:49.568884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.032 [2024-07-15 16:23:49.568900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.032 #44 NEW cov: 12188 ft: 15414 corp: 28/829b lim: 50 exec/s: 44 rss: 72Mb L: 24/49 MS: 1 EraseBytes- 00:07:10.032 [2024-07-15 16:23:49.619096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.032 [2024-07-15 16:23:49.619122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.032 [2024-07-15 16:23:49.619174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.032 [2024-07-15 16:23:49.619190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.032 [2024-07-15 16:23:49.619246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.032 [2024-07-15 16:23:49.619261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.292 #48 NEW cov: 12188 ft: 15427 corp: 29/864b lim: 50 exec/s: 48 rss: 72Mb L: 35/49 MS: 4 CrossOver-ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:10.292 [2024-07-15 16:23:49.659218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.292 [2024-07-15 16:23:49.659244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.659291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.292 [2024-07-15 16:23:49.659306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.659358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.292 [2024-07-15 16:23:49.659373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.292 #49 NEW cov: 12188 ft: 15433 corp: 30/896b lim: 50 exec/s: 49 rss: 72Mb L: 32/49 MS: 1 EraseBytes- 00:07:10.292 [2024-07-15 16:23:49.709350] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.292 [2024-07-15 16:23:49.709376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.709419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.292 [2024-07-15 16:23:49.709434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.709494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.292 [2024-07-15 16:23:49.709510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.292 #50 NEW cov: 12188 ft: 15437 corp: 31/927b lim: 50 exec/s: 50 rss: 72Mb L: 31/49 MS: 1 EraseBytes- 00:07:10.292 [2024-07-15 16:23:49.759641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.292 [2024-07-15 16:23:49.759668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.759713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.292 [2024-07-15 16:23:49.759729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.759783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.292 [2024-07-15 16:23:49.759798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.759852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.292 [2024-07-15 16:23:49.759866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.292 #51 NEW cov: 12188 ft: 15459 corp: 32/974b lim: 50 exec/s: 51 rss: 72Mb L: 47/49 MS: 1 PersAutoDict- DE: "\032?"- 00:07:10.292 [2024-07-15 16:23:49.809497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.292 [2024-07-15 16:23:49.809524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.809562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.292 [2024-07-15 16:23:49.809578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.292 #52 NEW cov: 12188 ft: 15465 corp: 33/998b lim: 50 exec/s: 52 rss: 72Mb L: 24/49 MS: 1 ShuffleBytes- 00:07:10.292 [2024-07-15 16:23:49.859911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.292 [2024-07-15 16:23:49.859936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.859985] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.292 [2024-07-15 16:23:49.860000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.860055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.292 [2024-07-15 16:23:49.860070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.292 [2024-07-15 16:23:49.860123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.292 [2024-07-15 16:23:49.860142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.292 #53 NEW cov: 12188 ft: 15480 corp: 34/1045b lim: 50 exec/s: 53 rss: 72Mb L: 47/49 MS: 1 ChangeBit- 00:07:10.550 [2024-07-15 16:23:49.899591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.550 [2024-07-15 16:23:49.899617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.550 #54 NEW cov: 12188 ft: 15513 corp: 35/1063b lim: 50 exec/s: 54 rss: 73Mb L: 18/49 MS: 1 ChangeBinInt- 00:07:10.551 [2024-07-15 16:23:49.950132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.551 [2024-07-15 16:23:49.950161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.551 [2024-07-15 16:23:49.950204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.551 [2024-07-15 16:23:49.950219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.551 [2024-07-15 16:23:49.950271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.551 [2024-07-15 16:23:49.950287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.551 [2024-07-15 16:23:49.950344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.551 [2024-07-15 16:23:49.950360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.551 #55 NEW cov: 12188 ft: 15518 corp: 36/1111b lim: 50 exec/s: 55 rss: 73Mb L: 48/49 MS: 1 CrossOver- 00:07:10.551 [2024-07-15 16:23:49.990118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.551 [2024-07-15 16:23:49.990146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.551 [2024-07-15 16:23:49.990190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.551 [2024-07-15 16:23:49.990205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.551 [2024-07-15 16:23:49.990257] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.551 [2024-07-15 16:23:49.990273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.551 #56 NEW cov: 12188 ft: 15524 corp: 37/1142b lim: 50 exec/s: 56 rss: 73Mb L: 31/49 MS: 1 CopyPart- 00:07:10.551 [2024-07-15 16:23:50.040153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.551 [2024-07-15 16:23:50.040181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.551 [2024-07-15 16:23:50.040229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.551 [2024-07-15 16:23:50.040245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.551 #57 NEW cov: 12188 ft: 15576 corp: 38/1168b lim: 50 exec/s: 57 rss: 73Mb L: 26/49 MS: 1 CrossOver- 00:07:10.551 [2024-07-15 16:23:50.080260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.551 [2024-07-15 16:23:50.080290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.551 [2024-07-15 16:23:50.080342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.551 [2024-07-15 16:23:50.080361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.551 [2024-07-15 16:23:50.130389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.551 [2024-07-15 16:23:50.130418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.551 [2024-07-15 16:23:50.130478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.551 [2024-07-15 16:23:50.130495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.810 [2024-07-15 16:23:50.170485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.810 [2024-07-15 16:23:50.170512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.810 [2024-07-15 16:23:50.170568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.810 [2024-07-15 16:23:50.170583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.810 #60 NEW cov: 12188 ft: 15599 corp: 39/1197b lim: 50 exec/s: 30 rss: 73Mb L: 29/49 MS: 3 CrossOver-ChangeBit-PersAutoDict- DE: "\032?"- 00:07:10.810 #60 DONE cov: 12188 ft: 15599 corp: 39/1197b lim: 50 exec/s: 30 rss: 73Mb 00:07:10.810 ###### Recommended dictionary. ###### 00:07:10.810 "\032?" # Uses: 4 00:07:10.810 ###### End of recommended dictionary. 
###### 00:07:10.810 Done 60 runs in 2 second(s) 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:10.810 16:23:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:10.810 [2024-07-15 16:23:50.361488] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:07:10.810 [2024-07-15 16:23:50.361559] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032000 ] 00:07:10.810 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.069 [2024-07-15 16:23:50.542179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.069 [2024-07-15 16:23:50.608463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.327 [2024-07-15 16:23:50.667582] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.327 [2024-07-15 16:23:50.683877] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:11.327 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.327 INFO: Seed: 141642845 00:07:11.327 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:11.327 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:11.327 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:11.327 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.328 #2 INITED exec/s: 0 rss: 63Mb 00:07:11.328 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.328 This may also happen if the target rejected all inputs we tried so far 00:07:11.328 [2024-07-15 16:23:50.729217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.328 [2024-07-15 16:23:50.729247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.328 [2024-07-15 16:23:50.729282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.328 [2024-07-15 16:23:50.729297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.328 [2024-07-15 16:23:50.729351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:11.328 [2024-07-15 16:23:50.729366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.586 NEW_FUNC[1/697]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:11.586 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.586 #8 NEW cov: 11970 ft: 11969 corp: 2/57b lim: 85 exec/s: 0 rss: 70Mb L: 56/56 MS: 1 InsertRepeatedBytes- 00:07:11.586 [2024-07-15 16:23:51.050118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.586 [2024-07-15 16:23:51.050152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.586 [2024-07-15 16:23:51.050209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.586 [2024-07-15 16:23:51.050225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.587 [2024-07-15 16:23:51.050280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:11.587 [2024-07-15 16:23:51.050295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.587 #9 NEW cov: 12100 ft: 12438 corp: 3/113b lim: 85 exec/s: 0 rss: 70Mb L: 56/56 MS: 1 ChangeBit- 00:07:11.587 [2024-07-15 16:23:51.100334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.587 [2024-07-15 16:23:51.100363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.587 [2024-07-15 16:23:51.100407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.587 [2024-07-15 16:23:51.100426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.587 [2024-07-15 16:23:51.100489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:11.587 [2024-07-15 16:23:51.100505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.587 [2024-07-15 16:23:51.100566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:11.587 [2024-07-15 16:23:51.100581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.587 #15 NEW cov: 12106 ft: 13052 corp: 4/194b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:07:11.587 [2024-07-15 16:23:51.150512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.587 [2024-07-15 16:23:51.150539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.587 [2024-07-15 16:23:51.150604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.587 [2024-07-15 16:23:51.150620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.587 [2024-07-15 16:23:51.150676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:11.587 [2024-07-15 16:23:51.150692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.587 [2024-07-15 16:23:51.150750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:11.587 [2024-07-15 16:23:51.150765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.846 #16 NEW cov: 12191 ft: 13395 corp: 5/275b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 ChangeByte- 00:07:11.846 [2024-07-15 16:23:51.200647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.846 [2024-07-15 16:23:51.200673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.200724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.846 [2024-07-15 16:23:51.200740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.200795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:11.846 [2024-07-15 16:23:51.200811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.200868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:11.846 [2024-07-15 16:23:51.200881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.846 #19 NEW cov: 12191 ft: 13499 corp: 6/350b lim: 85 exec/s: 0 rss: 70Mb L: 75/81 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:11.846 [2024-07-15 16:23:51.240777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.846 [2024-07-15 16:23:51.240805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.240852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.846 [2024-07-15 16:23:51.240869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.240942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:11.846 [2024-07-15 16:23:51.240957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.241014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:11.846 [2024-07-15 16:23:51.241031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.846 #20 NEW cov: 12191 ft: 13580 corp: 7/431b lim: 85 exec/s: 0 rss: 71Mb L: 81/81 MS: 1 ChangeBit- 00:07:11.846 [2024-07-15 16:23:51.290935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.846 [2024-07-15 16:23:51.290963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.291012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.846 [2024-07-15 16:23:51.291027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.291086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:11.846 [2024-07-15 16:23:51.291101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.291157] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:11.846 [2024-07-15 16:23:51.291174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.846 #21 NEW cov: 12191 ft: 13780 corp: 8/512b lim: 85 exec/s: 0 rss: 71Mb L: 81/81 MS: 1 CrossOver- 00:07:11.846 [2024-07-15 16:23:51.330880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.846 [2024-07-15 16:23:51.330907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.330967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.846 [2024-07-15 16:23:51.330982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.846 [2024-07-15 16:23:51.331066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:11.846 [2024-07-15 16:23:51.331082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.846 #22 NEW cov: 12191 ft: 13840 corp: 9/569b lim: 85 exec/s: 0 rss: 71Mb L: 57/81 MS: 1 InsertByte- 00:07:11.846 [2024-07-15 16:23:51.370799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.847 [2024-07-15 16:23:51.370825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.847 [2024-07-15 16:23:51.370878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.847 [2024-07-15 16:23:51.370895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.847 #26 NEW cov: 12191 ft: 14261 corp: 10/603b lim: 85 exec/s: 0 rss: 71Mb L: 34/81 MS: 4 CopyPart-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:07:11.847 [2024-07-15 16:23:51.411084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.847 [2024-07-15 16:23:51.411111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.847 [2024-07-15 16:23:51.411156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:11.847 [2024-07-15 16:23:51.411174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.847 [2024-07-15 16:23:51.411231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:11.847 [2024-07-15 16:23:51.411247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.105 #27 NEW cov: 12191 ft: 14338 corp: 11/660b lim: 85 exec/s: 0 rss: 71Mb L: 57/81 MS: 1 ChangeBinInt- 00:07:12.105 [2024-07-15 16:23:51.461204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.105 [2024-07-15 16:23:51.461231] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.105 [2024-07-15 16:23:51.461273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.105 [2024-07-15 16:23:51.461290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.105 [2024-07-15 16:23:51.461349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.105 [2024-07-15 16:23:51.461364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.105 #28 NEW cov: 12191 ft: 14384 corp: 12/716b lim: 85 exec/s: 0 rss: 71Mb L: 56/81 MS: 1 ChangeByte- 00:07:12.105 [2024-07-15 16:23:51.501277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.105 [2024-07-15 16:23:51.501305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.105 [2024-07-15 16:23:51.501343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.105 [2024-07-15 16:23:51.501359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.105 [2024-07-15 16:23:51.501418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.105 [2024-07-15 16:23:51.501435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.105 #29 NEW cov: 12191 ft: 14433 corp: 13/772b lim: 85 exec/s: 0 rss: 71Mb L: 56/81 MS: 1 ShuffleBytes- 00:07:12.105 [2024-07-15 16:23:51.541590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.105 [2024-07-15 16:23:51.541617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.105 [2024-07-15 16:23:51.541665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.106 [2024-07-15 16:23:51.541681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.541739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.106 [2024-07-15 16:23:51.541756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.541813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.106 [2024-07-15 16:23:51.541829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.106 #30 NEW cov: 12191 ft: 14458 corp: 14/853b lim: 85 exec/s: 0 rss: 71Mb L: 81/81 MS: 1 CopyPart- 00:07:12.106 [2024-07-15 16:23:51.591715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.106 [2024-07-15 16:23:51.591743] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.591786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.106 [2024-07-15 16:23:51.591801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.591858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.106 [2024-07-15 16:23:51.591889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.591946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.106 [2024-07-15 16:23:51.591962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.106 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:12.106 #31 NEW cov: 12214 ft: 14474 corp: 15/934b lim: 85 exec/s: 0 rss: 71Mb L: 81/81 MS: 1 ChangeByte- 00:07:12.106 [2024-07-15 16:23:51.631890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.106 [2024-07-15 16:23:51.631917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.631965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.106 [2024-07-15 16:23:51.631981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.632038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.106 [2024-07-15 16:23:51.632054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.632111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.106 [2024-07-15 16:23:51.632127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.106 #32 NEW cov: 12214 ft: 14517 corp: 16/1009b lim: 85 exec/s: 0 rss: 71Mb L: 75/81 MS: 1 ChangeBinInt- 00:07:12.106 [2024-07-15 16:23:51.681846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.106 [2024-07-15 16:23:51.681874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.681927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.106 [2024-07-15 16:23:51.681944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.106 [2024-07-15 16:23:51.682001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.106 [2024-07-15 
16:23:51.682018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.365 #33 NEW cov: 12214 ft: 14530 corp: 17/1066b lim: 85 exec/s: 0 rss: 71Mb L: 57/81 MS: 1 CMP- DE: "\001\000\002\000"- 00:07:12.365 [2024-07-15 16:23:51.731769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.365 [2024-07-15 16:23:51.731797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.731836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.365 [2024-07-15 16:23:51.731851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.365 #34 NEW cov: 12214 ft: 14551 corp: 18/1100b lim: 85 exec/s: 34 rss: 71Mb L: 34/81 MS: 1 CrossOver- 00:07:12.365 [2024-07-15 16:23:51.782247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.365 [2024-07-15 16:23:51.782274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.782324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.365 [2024-07-15 16:23:51.782340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.782398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.365 [2024-07-15 16:23:51.782413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.782472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.365 [2024-07-15 16:23:51.782488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.365 #40 NEW cov: 12214 ft: 14606 corp: 19/1181b lim: 85 exec/s: 40 rss: 72Mb L: 81/81 MS: 1 ChangeBit- 00:07:12.365 [2024-07-15 16:23:51.832427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.365 [2024-07-15 16:23:51.832460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.832528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.365 [2024-07-15 16:23:51.832545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.832601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.365 [2024-07-15 16:23:51.832616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.832672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 
00:07:12.365 [2024-07-15 16:23:51.832688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.365 #41 NEW cov: 12214 ft: 14692 corp: 20/1256b lim: 85 exec/s: 41 rss: 72Mb L: 75/81 MS: 1 ShuffleBytes- 00:07:12.365 [2024-07-15 16:23:51.872515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.365 [2024-07-15 16:23:51.872543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.872591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.365 [2024-07-15 16:23:51.872607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.872679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.365 [2024-07-15 16:23:51.872696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.365 [2024-07-15 16:23:51.872756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.365 [2024-07-15 16:23:51.872772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.365 #42 NEW cov: 12214 ft: 14701 corp: 21/1338b lim: 85 exec/s: 42 rss: 72Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:07:12.365 [2024-07-15 16:23:51.912639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.365 [2024-07-15 16:23:51.912666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.366 [2024-07-15 16:23:51.912708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.366 [2024-07-15 16:23:51.912725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.366 [2024-07-15 16:23:51.912781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.366 [2024-07-15 16:23:51.912797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.366 [2024-07-15 16:23:51.912853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.366 [2024-07-15 16:23:51.912869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.366 #43 NEW cov: 12214 ft: 14730 corp: 22/1413b lim: 85 exec/s: 43 rss: 72Mb L: 75/82 MS: 1 PersAutoDict- DE: "\001\000\002\000"- 00:07:12.366 [2024-07-15 16:23:51.952733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.366 [2024-07-15 16:23:51.952759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.366 [2024-07-15 16:23:51.952809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.366 [2024-07-15 16:23:51.952824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.366 [2024-07-15 16:23:51.952881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.366 [2024-07-15 16:23:51.952896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.366 [2024-07-15 16:23:51.952954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.366 [2024-07-15 16:23:51.952971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.625 #44 NEW cov: 12214 ft: 14763 corp: 23/1495b lim: 85 exec/s: 44 rss: 72Mb L: 82/82 MS: 1 ChangeByte- 00:07:12.625 [2024-07-15 16:23:52.002757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.625 [2024-07-15 16:23:52.002784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.625 [2024-07-15 16:23:52.002822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.625 [2024-07-15 16:23:52.002837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.625 [2024-07-15 16:23:52.002897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.625 [2024-07-15 16:23:52.002913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.625 #45 NEW cov: 12214 ft: 14788 corp: 24/1552b lim: 85 exec/s: 45 rss: 72Mb L: 57/82 MS: 1 CrossOver- 00:07:12.625 [2024-07-15 16:23:52.042700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.625 [2024-07-15 16:23:52.042728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.625 [2024-07-15 16:23:52.042768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.625 [2024-07-15 16:23:52.042782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.625 #46 NEW cov: 12214 ft: 14797 corp: 25/1586b lim: 85 exec/s: 46 rss: 72Mb L: 34/82 MS: 1 ChangeBinInt- 00:07:12.625 [2024-07-15 16:23:52.082623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.625 [2024-07-15 16:23:52.082650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.625 #47 NEW cov: 12214 ft: 15606 corp: 26/1605b lim: 85 exec/s: 47 rss: 72Mb L: 19/82 MS: 1 EraseBytes- 00:07:12.625 [2024-07-15 16:23:52.133285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.625 [2024-07-15 16:23:52.133312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:12.625 [2024-07-15 16:23:52.133362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.625 [2024-07-15 16:23:52.133379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.625 [2024-07-15 16:23:52.133454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.625 [2024-07-15 16:23:52.133470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.625 [2024-07-15 16:23:52.133531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.625 [2024-07-15 16:23:52.133546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.625 #48 NEW cov: 12214 ft: 15614 corp: 27/1686b lim: 85 exec/s: 48 rss: 72Mb L: 81/82 MS: 1 ShuffleBytes- 00:07:12.625 [2024-07-15 16:23:52.173046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.625 [2024-07-15 16:23:52.173073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.626 [2024-07-15 16:23:52.173127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.626 [2024-07-15 16:23:52.173144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.626 #49 NEW cov: 12214 ft: 15618 corp: 28/1721b lim: 85 exec/s: 49 rss: 72Mb L: 35/82 MS: 1 InsertByte- 00:07:12.885 [2024-07-15 16:23:52.223544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.885 [2024-07-15 16:23:52.223572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.223647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.885 [2024-07-15 16:23:52.223663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.223722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.885 [2024-07-15 16:23:52.223739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.223795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.885 [2024-07-15 16:23:52.223811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.885 #50 NEW cov: 12214 ft: 15641 corp: 29/1802b lim: 85 exec/s: 50 rss: 72Mb L: 81/82 MS: 1 ChangeByte- 00:07:12.885 [2024-07-15 16:23:52.263432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.885 [2024-07-15 16:23:52.263465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.263513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.885 [2024-07-15 16:23:52.263533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.263591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.885 [2024-07-15 16:23:52.263605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.885 #51 NEW cov: 12214 ft: 15644 corp: 30/1859b lim: 85 exec/s: 51 rss: 72Mb L: 57/82 MS: 1 ShuffleBytes- 00:07:12.885 [2024-07-15 16:23:52.303725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.885 [2024-07-15 16:23:52.303753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.303800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.885 [2024-07-15 16:23:52.303817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.303874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.885 [2024-07-15 16:23:52.303889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.303948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.885 [2024-07-15 16:23:52.303965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.885 #52 NEW cov: 12214 ft: 15660 corp: 31/1940b lim: 85 exec/s: 52 rss: 72Mb L: 81/82 MS: 1 CrossOver- 00:07:12.885 [2024-07-15 16:23:52.353884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.885 [2024-07-15 16:23:52.353911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.353960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.885 [2024-07-15 16:23:52.353975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.354033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.885 [2024-07-15 16:23:52.354049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.354106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.885 [2024-07-15 16:23:52.354122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.885 #53 NEW cov: 12214 ft: 15674 corp: 32/2010b lim: 85 exec/s: 53 
rss: 72Mb L: 70/82 MS: 1 CrossOver- 00:07:12.885 [2024-07-15 16:23:52.404010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.885 [2024-07-15 16:23:52.404036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.404104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.885 [2024-07-15 16:23:52.404121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.404177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.885 [2024-07-15 16:23:52.404193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.404248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.885 [2024-07-15 16:23:52.404264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.885 #54 NEW cov: 12214 ft: 15722 corp: 33/2089b lim: 85 exec/s: 54 rss: 72Mb L: 79/82 MS: 1 CrossOver- 00:07:12.885 [2024-07-15 16:23:52.444133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.885 [2024-07-15 16:23:52.444158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.444224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.885 [2024-07-15 16:23:52.444240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.444296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.885 [2024-07-15 16:23:52.444311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.885 [2024-07-15 16:23:52.444369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.885 [2024-07-15 16:23:52.444384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.885 #55 NEW cov: 12214 ft: 15737 corp: 34/2170b lim: 85 exec/s: 55 rss: 72Mb L: 81/82 MS: 1 ShuffleBytes- 00:07:13.145 [2024-07-15 16:23:52.483935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.145 [2024-07-15 16:23:52.483962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.484000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.145 [2024-07-15 16:23:52.484015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.145 #56 NEW cov: 12214 ft: 15748 corp: 35/2206b lim: 85 
exec/s: 56 rss: 72Mb L: 36/82 MS: 1 CrossOver- 00:07:13.145 [2024-07-15 16:23:52.534389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.145 [2024-07-15 16:23:52.534416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.534469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.145 [2024-07-15 16:23:52.534486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.534541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:13.145 [2024-07-15 16:23:52.534557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.534616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:13.145 [2024-07-15 16:23:52.534632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.145 #57 NEW cov: 12214 ft: 15759 corp: 36/2287b lim: 85 exec/s: 57 rss: 72Mb L: 81/82 MS: 1 ChangeBinInt- 00:07:13.145 [2024-07-15 16:23:52.574330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.145 [2024-07-15 16:23:52.574357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.574405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.145 [2024-07-15 16:23:52.574420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.574486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:13.145 [2024-07-15 16:23:52.574501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.145 #58 NEW cov: 12214 ft: 15765 corp: 37/2344b lim: 85 exec/s: 58 rss: 72Mb L: 57/82 MS: 1 ChangeBit- 00:07:13.145 [2024-07-15 16:23:52.614667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.145 [2024-07-15 16:23:52.614694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.614740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.145 [2024-07-15 16:23:52.614756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.614813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:13.145 [2024-07-15 16:23:52.614829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.614883] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:13.145 [2024-07-15 16:23:52.614899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.145 #59 NEW cov: 12214 ft: 15814 corp: 38/2419b lim: 85 exec/s: 59 rss: 72Mb L: 75/82 MS: 1 ChangeByte- 00:07:13.145 [2024-07-15 16:23:52.664761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.145 [2024-07-15 16:23:52.664787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.664840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.145 [2024-07-15 16:23:52.664856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.664928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:13.145 [2024-07-15 16:23:52.664944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.664999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:13.145 [2024-07-15 16:23:52.665015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.145 #60 NEW cov: 12214 ft: 15821 corp: 39/2494b lim: 85 exec/s: 60 rss: 72Mb L: 75/82 MS: 1 ChangeBinInt- 00:07:13.145 [2024-07-15 16:23:52.704886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.145 [2024-07-15 16:23:52.704912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.704961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.145 [2024-07-15 16:23:52.704977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.705032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:13.145 [2024-07-15 16:23:52.705047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.145 [2024-07-15 16:23:52.705102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:13.145 [2024-07-15 16:23:52.705121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.405 #61 NEW cov: 12214 ft: 15829 corp: 40/2575b lim: 85 exec/s: 30 rss: 73Mb L: 81/82 MS: 1 ChangeBinInt- 00:07:13.405 #61 DONE cov: 12214 ft: 15829 corp: 40/2575b lim: 85 exec/s: 30 rss: 73Mb 00:07:13.405 ###### Recommended dictionary. ###### 00:07:13.405 "\001\000\002\000" # Uses: 1 00:07:13.405 ###### End of recommended dictionary. 
###### 00:07:13.405 Done 61 runs in 2 second(s) 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.405 16:23:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:13.405 [2024-07-15 16:23:52.908401] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:07:13.405 [2024-07-15 16:23:52.908499] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032373 ] 00:07:13.405 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.664 [2024-07-15 16:23:53.090613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.664 [2024-07-15 16:23:53.158203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.664 [2024-07-15 16:23:53.217659] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.664 [2024-07-15 16:23:53.233978] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:13.664 INFO: Running with entropic power schedule (0xFF, 100). 00:07:13.664 INFO: Seed: 2690662788 00:07:13.922 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:13.922 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:13.922 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:13.922 INFO: A corpus is not provided, starting from an empty corpus 00:07:13.922 #2 INITED exec/s: 0 rss: 63Mb 00:07:13.922 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:13.922 This may also happen if the target rejected all inputs we tried so far 00:07:13.922 [2024-07-15 16:23:53.299968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:13.922 [2024-07-15 16:23:53.300005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.922 [2024-07-15 16:23:53.300127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:13.922 [2024-07-15 16:23:53.300151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.922 [2024-07-15 16:23:53.300263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:13.922 [2024-07-15 16:23:53.300285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.181 NEW_FUNC[1/695]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:14.181 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.181 #6 NEW cov: 11901 ft: 11901 corp: 2/18b lim: 25 exec/s: 0 rss: 70Mb L: 17/17 MS: 4 ChangeByte-InsertByte-CrossOver-InsertRepeatedBytes- 00:07:14.181 [2024-07-15 16:23:53.651027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.181 [2024-07-15 16:23:53.651069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.181 [2024-07-15 16:23:53.651212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.181 [2024-07-15 16:23:53.651240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.181 [2024-07-15 16:23:53.651376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:14.181 [2024-07-15 16:23:53.651403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.181 NEW_FUNC[1/1]: 0x1807380 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1528 00:07:14.181 #12 NEW cov: 12033 ft: 12515 corp: 3/35b lim: 25 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 ChangeBit- 00:07:14.181 [2024-07-15 16:23:53.711232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.181 [2024-07-15 16:23:53.711266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.181 [2024-07-15 16:23:53.711379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.181 [2024-07-15 16:23:53.711402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.181 [2024-07-15 16:23:53.711539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:14.181 [2024-07-15 16:23:53.711567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.181 #13 NEW cov: 12039 ft: 12762 corp: 4/52b lim: 25 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 ChangeByte- 00:07:14.181 [2024-07-15 16:23:53.771410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.181 [2024-07-15 16:23:53.771447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.181 [2024-07-15 16:23:53.771558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.181 [2024-07-15 16:23:53.771578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.181 [2024-07-15 16:23:53.771717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:14.181 [2024-07-15 16:23:53.771744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.440 #14 NEW cov: 12124 ft: 13151 corp: 5/69b lim: 25 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 ChangeBit- 00:07:14.440 [2024-07-15 16:23:53.821519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.440 [2024-07-15 16:23:53.821554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.440 [2024-07-15 16:23:53.821675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.440 [2024-07-15 16:23:53.821698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.440 [2024-07-15 16:23:53.821846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 
00:07:14.440 [2024-07-15 16:23:53.821871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.440 #15 NEW cov: 12124 ft: 13360 corp: 6/86b lim: 25 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 ChangeBinInt- 00:07:14.440 [2024-07-15 16:23:53.871749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.440 [2024-07-15 16:23:53.871785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.440 [2024-07-15 16:23:53.871914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.440 [2024-07-15 16:23:53.871939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.440 [2024-07-15 16:23:53.872073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:14.440 [2024-07-15 16:23:53.872095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.440 #16 NEW cov: 12124 ft: 13416 corp: 7/101b lim: 25 exec/s: 0 rss: 71Mb L: 15/17 MS: 1 EraseBytes- 00:07:14.440 [2024-07-15 16:23:53.931504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.440 [2024-07-15 16:23:53.931529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.440 #18 NEW cov: 12124 ft: 13891 corp: 8/106b lim: 25 exec/s: 0 rss: 71Mb L: 5/17 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:14.440 [2024-07-15 16:23:53.981657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.440 [2024-07-15 16:23:53.981685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.440 #19 NEW cov: 12124 ft: 13926 corp: 9/111b lim: 25 exec/s: 0 rss: 71Mb L: 5/17 MS: 1 CopyPart- 00:07:14.699 [2024-07-15 16:23:54.042193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.699 [2024-07-15 16:23:54.042229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.699 [2024-07-15 16:23:54.042355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.699 [2024-07-15 16:23:54.042378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.699 [2024-07-15 16:23:54.042522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:14.699 [2024-07-15 16:23:54.042549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.699 #20 NEW cov: 12124 ft: 14013 corp: 10/128b lim: 25 exec/s: 0 rss: 71Mb L: 17/17 MS: 1 CMP- DE: "\004\000"- 00:07:14.699 [2024-07-15 16:23:54.102021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.699 [2024-07-15 16:23:54.102046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.699 #21 NEW cov: 12124 ft: 14054 corp: 11/137b lim: 25 exec/s: 0 rss: 71Mb L: 9/17 MS: 1 InsertRepeatedBytes- 00:07:14.699 [2024-07-15 16:23:54.152144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.699 [2024-07-15 16:23:54.152174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.699 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:14.699 #22 NEW cov: 12147 ft: 14108 corp: 12/146b lim: 25 exec/s: 0 rss: 71Mb L: 9/17 MS: 1 ShuffleBytes- 00:07:14.699 [2024-07-15 16:23:54.212283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.699 [2024-07-15 16:23:54.212311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.699 #23 NEW cov: 12147 ft: 14129 corp: 13/151b lim: 25 exec/s: 0 rss: 71Mb L: 5/17 MS: 1 ShuffleBytes- 00:07:14.699 [2024-07-15 16:23:54.262492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.699 [2024-07-15 16:23:54.262525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.959 #24 NEW cov: 12147 ft: 14137 corp: 14/156b lim: 25 exec/s: 24 rss: 71Mb L: 5/17 MS: 1 EraseBytes- 00:07:14.959 [2024-07-15 16:23:54.323029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.959 [2024-07-15 16:23:54.323063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.959 [2024-07-15 16:23:54.323169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.959 [2024-07-15 16:23:54.323191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.959 [2024-07-15 16:23:54.323332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:14.959 [2024-07-15 16:23:54.323358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.959 #25 NEW cov: 12147 ft: 14186 corp: 15/173b lim: 25 exec/s: 25 rss: 71Mb L: 17/17 MS: 1 ChangeBit- 00:07:14.959 [2024-07-15 16:23:54.372860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.959 [2024-07-15 16:23:54.372892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.959 #26 NEW cov: 12147 ft: 14212 corp: 16/182b lim: 25 exec/s: 26 rss: 71Mb L: 9/17 MS: 1 CrossOver- 00:07:14.959 [2024-07-15 16:23:54.433084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.959 [2024-07-15 16:23:54.433111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.959 #27 NEW cov: 12147 ft: 14232 corp: 17/189b lim: 25 exec/s: 27 rss: 72Mb L: 7/17 MS: 1 CrossOver- 
00:07:14.959 [2024-07-15 16:23:54.483572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.959 [2024-07-15 16:23:54.483609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.959 [2024-07-15 16:23:54.483736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.959 [2024-07-15 16:23:54.483764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.959 [2024-07-15 16:23:54.483911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:14.959 [2024-07-15 16:23:54.483936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.959 #33 NEW cov: 12147 ft: 14252 corp: 18/204b lim: 25 exec/s: 33 rss: 72Mb L: 15/17 MS: 1 CrossOver- 00:07:14.959 [2024-07-15 16:23:54.543387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.959 [2024-07-15 16:23:54.543412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.218 #34 NEW cov: 12147 ft: 14271 corp: 19/210b lim: 25 exec/s: 34 rss: 72Mb L: 6/17 MS: 1 CopyPart- 00:07:15.218 [2024-07-15 16:23:54.593559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.218 [2024-07-15 16:23:54.593586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.218 #35 NEW cov: 12147 ft: 14283 corp: 20/216b lim: 25 exec/s: 35 rss: 72Mb L: 6/17 MS: 1 CrossOver- 00:07:15.218 [2024-07-15 16:23:54.653720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.218 [2024-07-15 16:23:54.653746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.218 #36 NEW cov: 12147 ft: 14288 corp: 21/222b lim: 25 exec/s: 36 rss: 72Mb L: 6/17 MS: 1 ChangeByte- 00:07:15.218 [2024-07-15 16:23:54.714139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.218 [2024-07-15 16:23:54.714167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.218 [2024-07-15 16:23:54.714308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.218 [2024-07-15 16:23:54.714334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.218 #39 NEW cov: 12147 ft: 14533 corp: 22/232b lim: 25 exec/s: 39 rss: 72Mb L: 10/17 MS: 3 InsertByte-CopyPart-InsertRepeatedBytes- 00:07:15.218 [2024-07-15 16:23:54.764229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.218 [2024-07-15 16:23:54.764256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.218 [2024-07-15 16:23:54.764395] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.218 [2024-07-15 16:23:54.764419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.218 #40 NEW cov: 12147 ft: 14605 corp: 23/242b lim: 25 exec/s: 40 rss: 72Mb L: 10/17 MS: 1 ChangeBinInt- 00:07:15.478 [2024-07-15 16:23:54.824489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.478 [2024-07-15 16:23:54.824522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.478 [2024-07-15 16:23:54.824655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.478 [2024-07-15 16:23:54.824682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.478 #41 NEW cov: 12147 ft: 14608 corp: 24/255b lim: 25 exec/s: 41 rss: 72Mb L: 13/17 MS: 1 EraseBytes- 00:07:15.478 [2024-07-15 16:23:54.874744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.478 [2024-07-15 16:23:54.874777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.478 [2024-07-15 16:23:54.874904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.478 [2024-07-15 16:23:54.874930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.478 [2024-07-15 16:23:54.875073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:15.478 [2024-07-15 16:23:54.875097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.478 #42 NEW cov: 12147 ft: 14626 corp: 25/272b lim: 25 exec/s: 42 rss: 72Mb L: 17/17 MS: 1 ShuffleBytes- 00:07:15.478 [2024-07-15 16:23:54.924624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.478 [2024-07-15 16:23:54.924651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.478 #43 NEW cov: 12147 ft: 14661 corp: 26/280b lim: 25 exec/s: 43 rss: 72Mb L: 8/17 MS: 1 EraseBytes- 00:07:15.478 [2024-07-15 16:23:54.974721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.478 [2024-07-15 16:23:54.974746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.478 #44 NEW cov: 12147 ft: 14670 corp: 27/289b lim: 25 exec/s: 44 rss: 72Mb L: 9/17 MS: 1 ChangeByte- 00:07:15.478 [2024-07-15 16:23:55.025300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.478 [2024-07-15 16:23:55.025335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.478 [2024-07-15 16:23:55.025449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.478 [2024-07-15 16:23:55.025478] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.478 [2024-07-15 16:23:55.025615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:15.478 [2024-07-15 16:23:55.025639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.478 #45 NEW cov: 12147 ft: 14693 corp: 28/306b lim: 25 exec/s: 45 rss: 72Mb L: 17/17 MS: 1 ShuffleBytes- 00:07:15.736 [2024-07-15 16:23:55.085480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.736 [2024-07-15 16:23:55.085514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.736 [2024-07-15 16:23:55.085615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.736 [2024-07-15 16:23:55.085637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.736 [2024-07-15 16:23:55.085768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:15.736 [2024-07-15 16:23:55.085793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.736 #46 NEW cov: 12147 ft: 14711 corp: 29/321b lim: 25 exec/s: 46 rss: 72Mb L: 15/17 MS: 1 ChangeByte- 00:07:15.736 [2024-07-15 16:23:55.135793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.736 [2024-07-15 16:23:55.135825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.736 [2024-07-15 16:23:55.135907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.736 [2024-07-15 16:23:55.135932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.736 [2024-07-15 16:23:55.136059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:15.736 [2024-07-15 16:23:55.136090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.737 [2024-07-15 16:23:55.136234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:15.737 [2024-07-15 16:23:55.136256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.737 #47 NEW cov: 12147 ft: 15145 corp: 30/342b lim: 25 exec/s: 47 rss: 72Mb L: 21/21 MS: 1 CrossOver- 00:07:15.737 [2024-07-15 16:23:55.195407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.737 [2024-07-15 16:23:55.195440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.737 #48 NEW cov: 12147 ft: 15154 corp: 31/351b lim: 25 exec/s: 48 rss: 72Mb L: 9/21 MS: 1 ShuffleBytes- 00:07:15.737 [2024-07-15 16:23:55.245431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.737 [2024-07-15 16:23:55.245466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.737 #49 NEW cov: 12147 ft: 15164 corp: 32/357b lim: 25 exec/s: 49 rss: 72Mb L: 6/21 MS: 1 InsertByte- 00:07:15.737 [2024-07-15 16:23:55.295675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.737 [2024-07-15 16:23:55.295700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.737 #50 NEW cov: 12147 ft: 15181 corp: 33/363b lim: 25 exec/s: 25 rss: 72Mb L: 6/21 MS: 1 ChangeBinInt- 00:07:15.737 #50 DONE cov: 12147 ft: 15181 corp: 33/363b lim: 25 exec/s: 25 rss: 72Mb 00:07:15.737 ###### Recommended dictionary. ###### 00:07:15.737 "\004\000" # Uses: 0 00:07:15.737 ###### End of recommended dictionary. ###### 00:07:15.737 Done 50 runs in 2 second(s) 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:15.996 16:23:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:15.996 [2024-07-15 16:23:55.487670] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:15.996 [2024-07-15 16:23:55.487742] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032908 ] 00:07:15.996 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.255 [2024-07-15 16:23:55.668508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.255 [2024-07-15 16:23:55.733945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.255 [2024-07-15 16:23:55.792746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.255 [2024-07-15 16:23:55.809000] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:16.255 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.255 INFO: Seed: 969700765 00:07:16.514 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:16.514 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:16.514 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:16.514 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.514 #2 INITED exec/s: 0 rss: 64Mb 00:07:16.514 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:16.514 This may also happen if the target rejected all inputs we tried so far 00:07:16.514 [2024-07-15 16:23:55.878783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.514 [2024-07-15 16:23:55.878815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.514 [2024-07-15 16:23:55.878939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.514 [2024-07-15 16:23:55.878962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.772 NEW_FUNC[1/697]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:16.772 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:16.772 #8 NEW cov: 11975 ft: 11975 corp: 2/42b lim: 100 exec/s: 0 rss: 71Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:07:16.772 [2024-07-15 16:23:56.209836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.772 [2024-07-15 16:23:56.209887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.772 [2024-07-15 16:23:56.210024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.772 [2024-07-15 16:23:56.210053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.772 #9 NEW cov: 12105 ft: 12485 corp: 3/83b lim: 100 exec/s: 0 rss: 71Mb L: 41/41 MS: 1 ChangeBit- 00:07:16.772 [2024-07-15 16:23:56.279960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:38483074744320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.772 [2024-07-15 16:23:56.279990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.772 [2024-07-15 16:23:56.280119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.772 [2024-07-15 16:23:56.280139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.772 #10 NEW cov: 12111 ft: 12837 corp: 4/124b lim: 100 exec/s: 0 rss: 71Mb L: 41/41 MS: 1 ChangeByte- 00:07:16.772 [2024-07-15 16:23:56.330118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:990511104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.773 [2024-07-15 16:23:56.330146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.773 [2024-07-15 16:23:56.330270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.773 [2024-07-15 16:23:56.330291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.773 #11 NEW cov: 12196 ft: 13155 corp: 5/166b lim: 100 exec/s: 0 rss: 71Mb L: 42/42 MS: 1 InsertByte- 00:07:17.031 [2024-07-15 16:23:56.390218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.390253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.031 [2024-07-15 16:23:56.390379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.390405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.031 #12 NEW cov: 12196 ft: 13280 corp: 6/209b lim: 100 exec/s: 0 rss: 71Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:07:17.031 [2024-07-15 16:23:56.440370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:29686981722112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.440398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.031 [2024-07-15 16:23:56.440534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.440560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.031 #13 NEW cov: 12196 ft: 13392 corp: 7/250b lim: 100 exec/s: 0 rss: 71Mb L: 41/43 MS: 1 ChangeBinInt- 00:07:17.031 [2024-07-15 16:23:56.500567] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:990511104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.500602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.031 [2024-07-15 16:23:56.500725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.500751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.031 #14 NEW cov: 12196 ft: 13499 corp: 8/300b lim: 100 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 CrossOver- 00:07:17.031 [2024-07-15 16:23:56.561314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.561348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.031 [2024-07-15 16:23:56.561447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12225489209634957737 len:43434 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.561470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.031 [2024-07-15 16:23:56.561595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12225489209634957737 len:43434 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.561621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.031 [2024-07-15 16:23:56.561752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2846425088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.561779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.031 #15 NEW cov: 12196 ft: 14053 corp: 9/389b lim: 100 exec/s: 0 rss: 72Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:07:17.031 [2024-07-15 16:23:56.611502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.611536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.031 [2024-07-15 16:23:56.611623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.611648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.031 [2024-07-15 16:23:56.611776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.611799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.031 [2024-07-15 16:23:56.611930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 
nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.031 [2024-07-15 16:23:56.611951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.288 #16 NEW cov: 12196 ft: 14140 corp: 10/487b lim: 100 exec/s: 0 rss: 72Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:07:17.288 [2024-07-15 16:23:56.671671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.288 [2024-07-15 16:23:56.671706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.288 [2024-07-15 16:23:56.671820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.288 [2024-07-15 16:23:56.671844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.289 [2024-07-15 16:23:56.671979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8970181431921507452 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.289 [2024-07-15 16:23:56.672003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.289 [2024-07-15 16:23:56.672142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8970181431921507452 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.289 [2024-07-15 16:23:56.672165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.289 #17 NEW cov: 12196 ft: 14224 corp: 11/582b lim: 100 exec/s: 0 rss: 72Mb L: 95/98 MS: 1 InsertRepeatedBytes- 00:07:17.289 [2024-07-15 16:23:56.721236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:29686981722112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.289 [2024-07-15 16:23:56.721267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.289 [2024-07-15 16:23:56.721398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:25958 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.289 [2024-07-15 16:23:56.721425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.289 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:17.289 #18 NEW cov: 12219 ft: 14324 corp: 12/634b lim: 100 exec/s: 0 rss: 72Mb L: 52/98 MS: 1 InsertRepeatedBytes- 00:07:17.289 [2024-07-15 16:23:56.781364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:990511104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.289 [2024-07-15 16:23:56.781390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.289 [2024-07-15 16:23:56.781519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.289 [2024-07-15 16:23:56.781541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.289 #19 NEW cov: 12219 ft: 14351 corp: 13/684b lim: 100 exec/s: 0 rss: 72Mb L: 50/98 MS: 1 ChangeBinInt- 00:07:17.289 [2024-07-15 16:23:56.841559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:990511104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.289 [2024-07-15 16:23:56.841587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.289 [2024-07-15 16:23:56.841724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.289 [2024-07-15 16:23:56.841749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.289 #20 NEW cov: 12219 ft: 14412 corp: 14/724b lim: 100 exec/s: 20 rss: 72Mb L: 40/98 MS: 1 EraseBytes- 00:07:17.547 [2024-07-15 16:23:56.891722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:56.891748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 [2024-07-15 16:23:56.891880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:56.891906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.547 #21 NEW cov: 12219 ft: 14415 corp: 15/770b lim: 100 exec/s: 21 rss: 72Mb L: 46/98 MS: 1 InsertRepeatedBytes- 00:07:17.547 [2024-07-15 16:23:56.941902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:151314366464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:56.941936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 [2024-07-15 16:23:56.942072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:56.942100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.547 #22 NEW cov: 12219 ft: 14432 corp: 16/820b lim: 100 exec/s: 22 rss: 72Mb L: 50/98 MS: 1 ChangeByte- 00:07:17.547 [2024-07-15 16:23:56.992551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:56.992584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 [2024-07-15 16:23:56.992679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:56.992704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.547 [2024-07-15 16:23:56.992837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:56.992861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.547 [2024-07-15 16:23:56.992994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:56.993018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.547 #23 NEW cov: 12219 ft: 14450 corp: 17/918b lim: 100 exec/s: 23 rss: 72Mb L: 98/98 MS: 1 CrossOver- 00:07:17.547 [2024-07-15 16:23:57.051897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:57.051923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 #24 NEW cov: 12219 ft: 15273 corp: 18/946b lim: 100 exec/s: 24 rss: 72Mb L: 28/98 MS: 1 EraseBytes- 00:07:17.547 [2024-07-15 16:23:57.102309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:38482940526592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:57.102340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 [2024-07-15 16:23:57.102466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-07-15 16:23:57.102500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.547 #25 NEW cov: 12219 ft: 15325 corp: 19/987b lim: 100 exec/s: 25 rss: 72Mb L: 41/98 MS: 1 ChangeBinInt- 00:07:17.805 [2024-07-15 16:23:57.152511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.152543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.805 [2024-07-15 16:23:57.152668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:49478023249920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.152688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.805 #26 NEW cov: 12219 ft: 15337 corp: 20/1031b lim: 100 exec/s: 26 rss: 72Mb L: 44/98 MS: 1 InsertByte- 00:07:17.805 [2024-07-15 16:23:57.203267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.203301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.805 [2024-07-15 16:23:57.203386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.203410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.805 [2024-07-15 
16:23:57.203543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.203569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.805 [2024-07-15 16:23:57.203700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.203725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.805 #27 NEW cov: 12219 ft: 15343 corp: 21/1129b lim: 100 exec/s: 27 rss: 72Mb L: 98/98 MS: 1 ChangeBit- 00:07:17.805 [2024-07-15 16:23:57.252862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.252897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.805 [2024-07-15 16:23:57.253031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:49478033211392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.253056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.805 #28 NEW cov: 12219 ft: 15354 corp: 22/1173b lim: 100 exec/s: 28 rss: 72Mb L: 44/98 MS: 1 ChangeByte- 00:07:17.805 [2024-07-15 16:23:57.313003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.313038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.805 [2024-07-15 16:23:57.313176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.313202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.805 #29 NEW cov: 12219 ft: 15392 corp: 23/1219b lim: 100 exec/s: 29 rss: 73Mb L: 46/98 MS: 1 ChangeBinInt- 00:07:17.805 [2024-07-15 16:23:57.373192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.373231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.805 [2024-07-15 16:23:57.373364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:49478023249927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.805 [2024-07-15 16:23:57.373386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.805 #30 NEW cov: 12219 ft: 15415 corp: 24/1263b lim: 100 exec/s: 30 rss: 73Mb L: 44/98 MS: 1 ChangeBinInt- 00:07:18.062 [2024-07-15 16:23:57.424037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.424074] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.424195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.424219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.424350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8970181431921507452 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.424372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.424514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8970181429832974336 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.424541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.062 #31 NEW cov: 12219 ft: 15423 corp: 25/1358b lim: 100 exec/s: 31 rss: 73Mb L: 95/98 MS: 1 CopyPart- 00:07:18.062 [2024-07-15 16:23:57.484051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.484083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.484179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.484202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.484332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.484355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.484488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.484510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.062 #32 NEW cov: 12219 ft: 15455 corp: 26/1446b lim: 100 exec/s: 32 rss: 73Mb L: 88/98 MS: 1 InsertRepeatedBytes- 00:07:18.062 [2024-07-15 16:23:57.534262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.534296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.534398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.534420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.534546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8970181431921507452 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.534571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.534703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8970181429832974336 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.534726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.062 #33 NEW cov: 12219 ft: 15473 corp: 27/1541b lim: 100 exec/s: 33 rss: 73Mb L: 95/98 MS: 1 ChangeBinInt- 00:07:18.062 [2024-07-15 16:23:57.594419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.594456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.594550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:31853 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.594582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.594709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8970181431921507452 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.594732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.594860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8970181431921507452 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.594883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.062 #34 NEW cov: 12219 ft: 15497 corp: 28/1636b lim: 100 exec/s: 34 rss: 73Mb L: 95/98 MS: 1 ChangeBit- 00:07:18.062 [2024-07-15 16:23:57.644068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.644105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.062 [2024-07-15 16:23:57.644229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.062 [2024-07-15 16:23:57.644253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.321 #35 NEW cov: 12219 ft: 15508 corp: 29/1677b lim: 100 exec/s: 35 rss: 73Mb L: 41/98 MS: 1 ShuffleBytes- 00:07:18.321 [2024-07-15 16:23:57.694264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.321 [2024-07-15 16:23:57.694299] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.321 [2024-07-15 16:23:57.694433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:49478033211392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.321 [2024-07-15 16:23:57.694459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.321 #36 NEW cov: 12219 ft: 15512 corp: 30/1721b lim: 100 exec/s: 36 rss: 73Mb L: 44/98 MS: 1 ChangeBit- 00:07:18.321 [2024-07-15 16:23:57.754468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.321 [2024-07-15 16:23:57.754499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.321 [2024-07-15 16:23:57.754624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:49478023249927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.321 [2024-07-15 16:23:57.754649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.321 #37 NEW cov: 12219 ft: 15539 corp: 31/1765b lim: 100 exec/s: 37 rss: 73Mb L: 44/98 MS: 1 ChangeBinInt- 00:07:18.321 [2024-07-15 16:23:57.815515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.321 [2024-07-15 16:23:57.815552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.321 [2024-07-15 16:23:57.815644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.321 [2024-07-15 16:23:57.815671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.321 [2024-07-15 16:23:57.815800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.321 [2024-07-15 16:23:57.815824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.321 [2024-07-15 16:23:57.815953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.321 [2024-07-15 16:23:57.815975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.321 [2024-07-15 16:23:57.816098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.321 [2024-07-15 16:23:57.816125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:18.321 #38 NEW cov: 12219 ft: 15587 corp: 32/1865b lim: 100 exec/s: 19 rss: 73Mb L: 100/100 MS: 1 CopyPart- 00:07:18.321 #38 DONE cov: 12219 ft: 15587 corp: 32/1865b lim: 100 exec/s: 19 rss: 73Mb 00:07:18.321 Done 38 runs in 2 second(s) 00:07:18.580 16:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf 
/tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:18.580 16:23:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.580 16:23:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.580 16:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:18.580 00:07:18.580 real 1m4.176s 00:07:18.580 user 1m40.715s 00:07:18.580 sys 0m6.987s 00:07:18.580 16:23:57 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.580 16:23:57 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:18.580 ************************************ 00:07:18.580 END TEST nvmf_llvm_fuzz 00:07:18.580 ************************************ 00:07:18.580 16:23:58 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:07:18.580 16:23:58 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:18.580 16:23:58 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:18.580 16:23:58 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:18.580 16:23:58 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.580 16:23:58 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.580 16:23:58 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:18.580 ************************************ 00:07:18.580 START TEST vfio_llvm_fuzz 00:07:18.580 ************************************ 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:18.580 * Looking for test storage... 00:07:18.580 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # 
CONFIG_HAVE_EXECINFO_H=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:18.580 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:18.581 
16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:18.581 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:18.842 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:18.842 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:18.842 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:18.842 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:18.842 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:18.843 #define SPDK_CONFIG_H 00:07:18.843 #define SPDK_CONFIG_APPS 1 00:07:18.843 #define SPDK_CONFIG_ARCH native 00:07:18.843 #undef SPDK_CONFIG_ASAN 00:07:18.843 #undef SPDK_CONFIG_AVAHI 00:07:18.843 #undef SPDK_CONFIG_CET 00:07:18.843 #define SPDK_CONFIG_COVERAGE 1 00:07:18.843 #define SPDK_CONFIG_CROSS_PREFIX 00:07:18.843 #undef 
SPDK_CONFIG_CRYPTO 00:07:18.843 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:18.843 #undef SPDK_CONFIG_CUSTOMOCF 00:07:18.843 #undef SPDK_CONFIG_DAOS 00:07:18.843 #define SPDK_CONFIG_DAOS_DIR 00:07:18.843 #define SPDK_CONFIG_DEBUG 1 00:07:18.843 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:18.843 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:18.843 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:18.843 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:18.843 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:18.843 #undef SPDK_CONFIG_DPDK_UADK 00:07:18.843 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:18.843 #define SPDK_CONFIG_EXAMPLES 1 00:07:18.843 #undef SPDK_CONFIG_FC 00:07:18.843 #define SPDK_CONFIG_FC_PATH 00:07:18.843 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:18.843 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:18.843 #undef SPDK_CONFIG_FUSE 00:07:18.843 #define SPDK_CONFIG_FUZZER 1 00:07:18.843 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:18.843 #undef SPDK_CONFIG_GOLANG 00:07:18.843 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:18.843 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:18.843 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:18.843 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:18.843 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:18.843 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:18.843 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:18.843 #define SPDK_CONFIG_IDXD 1 00:07:18.843 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:18.843 #undef SPDK_CONFIG_IPSEC_MB 00:07:18.843 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:18.843 #define SPDK_CONFIG_ISAL 1 00:07:18.843 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:18.843 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:18.843 #define SPDK_CONFIG_LIBDIR 00:07:18.843 #undef SPDK_CONFIG_LTO 00:07:18.843 #define SPDK_CONFIG_MAX_LCORES 128 00:07:18.843 #define SPDK_CONFIG_NVME_CUSE 1 00:07:18.843 #undef SPDK_CONFIG_OCF 00:07:18.843 #define SPDK_CONFIG_OCF_PATH 00:07:18.843 #define SPDK_CONFIG_OPENSSL_PATH 00:07:18.843 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:18.843 #define SPDK_CONFIG_PGO_DIR 00:07:18.843 #undef SPDK_CONFIG_PGO_USE 00:07:18.843 #define SPDK_CONFIG_PREFIX /usr/local 00:07:18.843 #undef SPDK_CONFIG_RAID5F 00:07:18.843 #undef SPDK_CONFIG_RBD 00:07:18.843 #define SPDK_CONFIG_RDMA 1 00:07:18.843 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:18.843 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:18.843 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:18.843 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:18.843 #undef SPDK_CONFIG_SHARED 00:07:18.843 #undef SPDK_CONFIG_SMA 00:07:18.843 #define SPDK_CONFIG_TESTS 1 00:07:18.843 #undef SPDK_CONFIG_TSAN 00:07:18.843 #define SPDK_CONFIG_UBLK 1 00:07:18.843 #define SPDK_CONFIG_UBSAN 1 00:07:18.843 #undef SPDK_CONFIG_UNIT_TESTS 00:07:18.843 #undef SPDK_CONFIG_URING 00:07:18.843 #define SPDK_CONFIG_URING_PATH 00:07:18.843 #undef SPDK_CONFIG_URING_ZNS 00:07:18.843 #undef SPDK_CONFIG_USDT 00:07:18.843 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:18.843 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:18.843 #define SPDK_CONFIG_VFIO_USER 1 00:07:18.843 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:18.843 #define SPDK_CONFIG_VHOST 1 00:07:18.843 #define SPDK_CONFIG_VIRTIO 1 00:07:18.843 #undef SPDK_CONFIG_VTUNE 00:07:18.843 #define SPDK_CONFIG_VTUNE_DIR 00:07:18.843 #define SPDK_CONFIG_WERROR 1 00:07:18.843 #define SPDK_CONFIG_WPDK_DIR 00:07:18.843 #undef SPDK_CONFIG_XNVME 00:07:18.843 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 
-- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:18.843 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:18.844 16:23:58 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@130 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:18.844 16:23:58 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:18.844 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:18.845 16:23:58 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 2033456 ]] 
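The test "[[ -z 2033456 ]]" at the end of the block above pairs with the "kill -0 2033456" probe on the next trace line; together they are the usual shell idiom for asking whether a PID is still alive, since kill -0 delivers no signal and only checks that the target process exists and may be signalled. In generic form:

    pid=2033456                                   # value taken from this trace
    if [[ -n "$pid" ]] && kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is still running"
    else
        echo "process $pid is gone (or not ours to signal)"
    fi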
00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 2033456 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.91OJpQ 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.91OJpQ/tests/vfio /tmp/spdk.91OJpQ 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=954408960 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4330020864 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz 
-- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=54028075008 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742317568 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7714242560 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866448384 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342484992 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348465152 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5980160 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870278144 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=880640 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:18.845 * Looking for test storage... 
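The read loop traced here walks df -T output (header filtered out with grep -v Filesystem) and records, per mount point, the backing device, filesystem type and total/used/available space; judging by the byte values stored in sizes/uses/avails, the 1K-block counts from df are scaled by 1024. A minimal stand-alone version of that bookkeeping, with the caveat that the exact handling in autotest_common.sh may differ:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))          # df -T reports 1K blocks
        uses["$mount"]=$((use * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

The candidate loop that follows then takes the first directory whose mount point still offers the requested ~2 GiB plus margin and exports it as SPDK_TEST_STORAGE.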
00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=54028075008 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:18.845 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=9928835072 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:18.846 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:18.846 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:18.846 
16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:18.846 16:23:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:18.846 [2024-07-15 16:23:58.370865] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:18.846 [2024-07-15 16:23:58.370937] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2033506 ] 00:07:18.846 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.105 [2024-07-15 16:23:58.447085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.105 [2024-07-15 16:23:58.532499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.364 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.364 INFO: Seed: 3866668901 00:07:19.364 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:19.364 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:19.364 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:19.364 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.364 #2 INITED exec/s: 0 rss: 65Mb 00:07:19.364 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:19.364 This may also happen if the target rejected all inputs we tried so far 00:07:19.364 [2024-07-15 16:23:58.778649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:19.623 NEW_FUNC[1/657]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:19.623 NEW_FUNC[2/657]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:19.623 #21 NEW cov: 10960 ft: 10539 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 4 CopyPart-ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:19.881 NEW_FUNC[1/1]: 0x1d5c7a0 in spdk_thread_is_exited /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:736 00:07:19.881 #25 NEW cov: 10977 ft: 13697 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 4 CrossOver-ChangeByte-ChangeBinInt-CrossOver- 00:07:19.881 #26 NEW cov: 10977 ft: 14955 corp: 4/19b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeASCIIInt- 00:07:20.140 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:20.140 #27 NEW cov: 10994 ft: 16381 corp: 5/25b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:07:20.140 #28 NEW cov: 10994 ft: 16619 corp: 6/31b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:20.398 #29 NEW cov: 10994 ft: 17176 corp: 7/37b lim: 6 exec/s: 29 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 00:07:20.398 #35 NEW cov: 10994 ft: 17261 corp: 8/43b lim: 6 exec/s: 35 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:20.658 #36 NEW cov: 10994 ft: 17700 corp: 9/49b lim: 6 exec/s: 36 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:07:20.658 #37 NEW cov: 10994 ft: 17750 corp: 10/55b lim: 6 exec/s: 37 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:07:20.921 #38 NEW cov: 10994 ft: 17797 corp: 11/61b lim: 6 exec/s: 38 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:07:20.921 #39 NEW cov: 10994 ft: 17874 corp: 12/67b lim: 6 exec/s: 39 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:07:20.921 #40 NEW cov: 10994 ft: 18014 corp: 13/73b lim: 6 exec/s: 40 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:07:21.181 #41 NEW cov: 11001 ft: 18219 corp: 14/79b lim: 6 exec/s: 41 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:21.181 #44 NEW cov: 11002 ft: 18277 corp: 15/85b lim: 6 exec/s: 22 rss: 76Mb L: 6/6 MS: 3 EraseBytes-CrossOver-CrossOver- 00:07:21.181 #44 DONE cov: 11002 ft: 18277 corp: 15/85b lim: 6 exec/s: 22 rss: 76Mb 00:07:21.181 Done 44 runs in 2 second(s) 00:07:21.181 [2024-07-15 16:24:00.764628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local 
fuzzer_dir=/tmp/vfio-user-1 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:21.441 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.441 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:21.442 16:24:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:21.701 [2024-07-15 16:24:01.051823] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:21.701 [2024-07-15 16:24:01.051894] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2034014 ] 00:07:21.701 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.701 [2024-07-15 16:24:01.125927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.701 [2024-07-15 16:24:01.199244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.960 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.960 INFO: Seed: 2230722719 00:07:21.960 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:21.960 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:21.960 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:21.960 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.960 #2 INITED exec/s: 0 rss: 64Mb 00:07:21.960 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:21.960 This may also happen if the target rejected all inputs we tried so far 00:07:21.960 [2024-07-15 16:24:01.433576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:21.960 [2024-07-15 16:24:01.485484] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:21.960 [2024-07-15 16:24:01.485510] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:21.960 [2024-07-15 16:24:01.485538] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:22.478 NEW_FUNC[1/657]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:22.478 NEW_FUNC[2/657]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:22.478 #41 NEW cov: 10889 ft: 10922 corp: 2/5b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 4 InsertByte-InsertByte-ChangeBit-CopyPart- 00:07:22.478 [2024-07-15 16:24:01.950151] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:22.478 [2024-07-15 16:24:01.950189] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:22.478 [2024-07-15 16:24:01.950208] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:22.478 NEW_FUNC[1/3]: 0x171acc0 in nvme_pcie_qpair_submit_tracker /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:623 00:07:22.478 NEW_FUNC[2/3]: 0x171df00 in nvme_pcie_copy_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:606 00:07:22.478 #42 NEW cov: 10973 ft: 14247 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CMP- DE: "\377\377\377\000"- 00:07:22.738 [2024-07-15 16:24:02.137867] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:22.738 [2024-07-15 16:24:02.137890] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:22.738 [2024-07-15 16:24:02.137907] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:22.738 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:22.738 #46 NEW cov: 10990 ft: 15139 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 ChangeBinInt-ChangeByte-CrossOver-InsertByte- 00:07:22.738 [2024-07-15 16:24:02.326544] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:22.738 [2024-07-15 16:24:02.326566] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:22.738 [2024-07-15 16:24:02.326583] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:22.996 #47 NEW cov: 10990 ft: 15338 corp: 5/17b lim: 4 exec/s: 47 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:22.996 [2024-07-15 16:24:02.506368] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:22.996 [2024-07-15 16:24:02.506390] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:22.996 [2024-07-15 16:24:02.506407] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.255 #48 NEW cov: 10990 ft: 16100 corp: 6/21b lim: 4 exec/s: 48 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:23.255 [2024-07-15 16:24:02.687178] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:23.255 [2024-07-15 16:24:02.687200] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:23.255 [2024-07-15 16:24:02.687218] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.255 #49 NEW cov: 10990 ft: 16981 corp: 7/25b lim: 4 exec/s: 49 rss: 74Mb L: 4/4 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:07:23.514 [2024-07-15 16:24:02.870035] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:23.514 [2024-07-15 16:24:02.870059] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:23.514 [2024-07-15 16:24:02.870076] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.514 #50 NEW cov: 10990 ft: 17133 corp: 8/29b lim: 4 exec/s: 50 rss: 74Mb L: 4/4 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:07:23.514 [2024-07-15 16:24:03.053304] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:23.514 [2024-07-15 16:24:03.053331] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:23.514 [2024-07-15 16:24:03.053348] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.773 #51 NEW cov: 10990 ft: 17450 corp: 9/33b lim: 4 exec/s: 51 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:23.773 [2024-07-15 16:24:03.237293] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:23.773 [2024-07-15 16:24:03.237315] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:23.773 [2024-07-15 16:24:03.237332] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.773 #52 NEW cov: 10997 ft: 17683 corp: 10/37b lim: 4 exec/s: 52 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:07:24.032 [2024-07-15 16:24:03.421055] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:24.032 [2024-07-15 16:24:03.421076] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:24.032 [2024-07-15 16:24:03.421093] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:24.032 #53 NEW cov: 10997 ft: 17797 corp: 11/41b lim: 4 exec/s: 26 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:07:24.032 #53 DONE cov: 10997 ft: 17797 corp: 11/41b lim: 4 exec/s: 26 rss: 75Mb 00:07:24.032 ###### Recommended dictionary. ###### 00:07:24.032 "\377\377\377\000" # Uses: 2 00:07:24.032 ###### End of recommended dictionary. 
###### 00:07:24.032 Done 53 runs in 2 second(s) 00:07:24.032 [2024-07-15 16:24:03.555631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:24.291 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:24.291 16:24:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:24.291 [2024-07-15 16:24:03.846457] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
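Every fuzzer index gets the same treatment as the two runs above and this one: a private /tmp/vfio-user-N tree plus corpus directory, a JSON config rewritten by sed from the shared template, two leak: suppressions for known NVMf allocations, and a one-second llvm_vfio_fuzz run pinned to core mask 0x1. A condensed sketch of that per-index pattern follows; the flag meanings are read off the local variables in vfio/run.sh rather than from documentation, the suppression-file redirection is inferred (xtrace does not show redirections), the -P output option is omitted, and SPDK_DIR stands in for the jenkins workspace path:

    run_one_vfio_fuzzer() {                       # illustrative only, not the real run.sh
        local i=$1 timen=$2 core=$3
        local d=/tmp/vfio-user-$i
        local corpus=$SPDK_DIR/../corpus/llvm_vfio_$i
        mkdir -p "$d/domain/1" "$d/domain/2" "$corpus"
        sed -e "s%/tmp/vfio-user/domain/1%$d/domain/1%" \
            -e "s%/tmp/vfio-user/domain/2%$d/domain/2%" \
            "$SPDK_DIR/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$d/fuzz_vfio_json.conf"
        echo leak:spdk_nvmf_qpair_disconnect >> /var/tmp/suppress_vfio_fuzz
        echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_vfio_fuzz
        "$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
            -m "$core" -s 0 -t "$timen" -Z "$i" \
            -c "$d/fuzz_vfio_json.conf" -D "$corpus" \
            -F "$d/domain/1" -Y "$d/domain/2" -r "$d/spdk$i.sock"
    }
    # e.g. run_one_vfio_fuzzer 2 1 0x1            # fuzzer_type 2, 1 second, core 0x1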
00:07:24.291 [2024-07-15 16:24:03.846528] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2034478 ] 00:07:24.291 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.551 [2024-07-15 16:24:03.922229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.551 [2024-07-15 16:24:03.997307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.810 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.810 INFO: Seed: 732750777 00:07:24.810 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:24.810 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:24.810 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:24.810 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.810 #2 INITED exec/s: 0 rss: 64Mb 00:07:24.810 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.810 This may also happen if the target rejected all inputs we tried so far 00:07:24.810 [2024-07-15 16:24:04.230434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:24.810 [2024-07-15 16:24:04.287209] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:25.328 NEW_FUNC[1/658]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:25.328 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:25.328 #5 NEW cov: 10929 ft: 10914 corp: 2/9b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 3 ShuffleBytes-InsertRepeatedBytes-InsertByte- 00:07:25.328 [2024-07-15 16:24:04.746119] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:25.328 NEW_FUNC[1/1]: 0x11a5cd0 in nvmf_ctrlr_ns_is_visible /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/./nvmf_internal.h:486 00:07:25.328 #6 NEW cov: 10956 ft: 14609 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:25.587 [2024-07-15 16:24:04.948197] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:25.587 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:25.587 #12 NEW cov: 10973 ft: 15278 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:25.587 [2024-07-15 16:24:05.133356] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:25.845 #13 NEW cov: 10973 ft: 15714 corp: 5/33b lim: 8 exec/s: 13 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:25.845 [2024-07-15 16:24:05.317186] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:25.845 #14 NEW cov: 10973 ft: 16703 corp: 6/41b lim: 8 exec/s: 14 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:07:26.103 [2024-07-15 16:24:05.502981] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.103 #20 NEW cov: 10973 ft: 16902 corp: 7/49b lim: 8 exec/s: 20 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:07:26.103 [2024-07-15 16:24:05.685959] vfio_user.c: 
170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.362 #24 NEW cov: 10973 ft: 17246 corp: 8/57b lim: 8 exec/s: 24 rss: 74Mb L: 8/8 MS: 4 CrossOver-ChangeBinInt-InsertRepeatedBytes-CopyPart- 00:07:26.362 [2024-07-15 16:24:05.875253] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.630 #25 NEW cov: 10973 ft: 17349 corp: 9/65b lim: 8 exec/s: 25 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:07:26.630 [2024-07-15 16:24:06.052728] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.630 #28 NEW cov: 10980 ft: 17619 corp: 10/73b lim: 8 exec/s: 28 rss: 74Mb L: 8/8 MS: 3 CrossOver-InsertByte-InsertByte- 00:07:26.937 [2024-07-15 16:24:06.226507] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.937 #29 NEW cov: 10980 ft: 17700 corp: 11/81b lim: 8 exec/s: 14 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:26.937 #29 DONE cov: 10980 ft: 17700 corp: 11/81b lim: 8 exec/s: 14 rss: 74Mb 00:07:26.937 Done 29 runs in 2 second(s) 00:07:26.937 [2024-07-15 16:24:06.352636] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:27.236 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:27.236 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:27.237 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.237 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:27.237 16:24:06 llvm_fuzz.vfio_llvm_fuzz -- 
vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:27.237 [2024-07-15 16:24:06.650463] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:27.237 [2024-07-15 16:24:06.650535] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2035436 ] 00:07:27.237 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.237 [2024-07-15 16:24:06.723190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.237 [2024-07-15 16:24:06.792901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.495 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.495 INFO: Seed: 3527743562 00:07:27.495 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:27.495 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:27.495 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:27.495 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.495 #2 INITED exec/s: 0 rss: 64Mb 00:07:27.495 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.495 This may also happen if the target rejected all inputs we tried so far 00:07:27.495 [2024-07-15 16:24:07.029215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:28.010 NEW_FUNC[1/659]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:28.010 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:28.010 #39 NEW cov: 10947 ft: 10915 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:28.269 #59 NEW cov: 10961 ft: 13916 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 5 ShuffleBytes-CrossOver-ChangeByte-InsertRepeatedBytes-CopyPart- 00:07:28.527 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:28.527 #60 NEW cov: 10981 ft: 15638 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:07:28.527 #66 NEW cov: 10981 ft: 16231 corp: 5/129b lim: 32 exec/s: 66 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:28.785 #67 NEW cov: 10981 ft: 16606 corp: 6/161b lim: 32 exec/s: 67 rss: 76Mb L: 32/32 MS: 1 CrossOver- 00:07:29.042 #73 NEW cov: 10981 ft: 16801 corp: 7/193b lim: 32 exec/s: 73 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:07:29.042 #74 NEW cov: 10981 ft: 17236 corp: 8/225b lim: 32 exec/s: 74 rss: 76Mb L: 32/32 MS: 1 CrossOver- 00:07:29.300 #75 NEW cov: 10988 ft: 17296 corp: 9/257b lim: 32 exec/s: 75 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:29.558 #76 NEW cov: 10988 ft: 17530 corp: 10/289b lim: 32 exec/s: 38 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:07:29.558 #76 DONE cov: 10988 ft: 17530 corp: 10/289b lim: 32 exec/s: 38 rss: 76Mb 00:07:29.558 
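The #N NEW / #N DONE lines in every one of these runs are standard libFuzzer status output, so they read the same way in each block: the leading number is how many inputs have been executed, NEW means the input was kept in the corpus, and the remaining fields track coverage and corpus growth. A rough legend against the nearby line, plus an illustrative one-liner (not part of the CI scripts) for pulling the final coverage out of a saved log:

    # "#76 NEW cov: 10988 ft: 17530 corp: 10/289b lim: 32 exec/s: 38 rss: 76Mb L: 32/32 MS: 1 CopyPart-"
    #   #76        inputs executed so far          NEW     input added to the corpus
    #   cov / ft   covered code points / distinct features seen by the instrumentation
    #   corp       corpus now holds 10 inputs, 289 bytes   lim   current input length cap
    #   exec/s     executions per second           rss     resident memory of the target
    #   L: 32/32   this input's length / largest so far    MS:   mutation sequence that produced it
    grep -o 'DONE cov: [0-9]*' vfio_fuzz_run.log | awk '{print $3}'   # final coverage per run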
Done 76 runs in 2 second(s) 00:07:29.558 [2024-07-15 16:24:09.019628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:29.816 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:29.816 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.816 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.816 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:29.816 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:29.817 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:29.817 16:24:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:29.817 [2024-07-15 16:24:09.311455] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
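Between iterations run.sh removes the previous /tmp/vfio-user-N tree and the shared suppression file, and the trap installed earlier ('cleanup ...; exit 1' on SIGINT, SIGTERM and EXIT) covers a fuzzer that dies mid-run, so stale vfio-user sockets cannot leak into the next iteration. The general shape of the idiom, with an illustrative cleanup body (the real cleanup function is defined outside this trace and may do more, for example kill leftover processes):

    cleanup() {
        rm -rf "$@"                               # placeholder body
    }
    trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT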
00:07:29.817 [2024-07-15 16:24:09.311522] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2035969 ] 00:07:29.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.817 [2024-07-15 16:24:09.384241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.075 [2024-07-15 16:24:09.456327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.075 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.075 INFO: Seed: 1896774931 00:07:30.075 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:30.075 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:30.075 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:30.075 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.075 #2 INITED exec/s: 0 rss: 64Mb 00:07:30.075 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.075 This may also happen if the target rejected all inputs we tried so far 00:07:30.334 [2024-07-15 16:24:09.689212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:30.592 NEW_FUNC[1/658]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:30.592 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:30.592 #181 NEW cov: 10947 ft: 10832 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 4 ChangeBit-InsertByte-InsertRepeatedBytes-InsertByte- 00:07:30.852 #187 NEW cov: 10961 ft: 13759 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:31.111 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:31.111 #188 NEW cov: 10978 ft: 15548 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:31.111 #189 NEW cov: 10978 ft: 15996 corp: 5/129b lim: 32 exec/s: 189 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\377\377\377\377\004\220\2523"- 00:07:31.371 #190 NEW cov: 10978 ft: 16408 corp: 6/161b lim: 32 exec/s: 190 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:31.630 #191 NEW cov: 10978 ft: 16669 corp: 7/193b lim: 32 exec/s: 191 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:07:31.889 #192 NEW cov: 10978 ft: 17183 corp: 8/225b lim: 32 exec/s: 192 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:31.889 #193 NEW cov: 10978 ft: 17355 corp: 9/257b lim: 32 exec/s: 193 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:32.148 #194 NEW cov: 10985 ft: 17476 corp: 10/289b lim: 32 exec/s: 194 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:32.407 #195 NEW cov: 10985 ft: 17675 corp: 11/321b lim: 32 exec/s: 97 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:07:32.407 #195 DONE cov: 10985 ft: 17675 corp: 11/321b lim: 32 exec/s: 97 rss: 74Mb 00:07:32.407 ###### Recommended dictionary. ###### 00:07:32.407 "\377\377\377\377\004\220\2523" # Uses: 0 00:07:32.407 "\000\000\000\000\000\000\000\001" # Uses: 0 00:07:32.407 ###### End of recommended dictionary. 
###### 00:07:32.407 Done 195 runs in 2 second(s) 00:07:32.407 [2024-07-15 16:24:11.817634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:32.666 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:32.666 16:24:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:32.666 [2024-07-15 16:24:12.109589] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
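The Recommended dictionary block just above (a similar one appeared after an earlier run) is libFuzzer suggesting byte sequences that kept paying off, printed with octal escapes: \377\377\377\377\004\220\2523 is FF FF FF FF 04 90 AA 33. Stand-alone libFuzzer targets normally consume such hints through a -dict=FILE of name="bytes" entries; whether this SPDK wrapper forwards that option is not shown in the log, so the snippet below only illustrates the file format:

    cat > vfio_user.dict <<'EOF'
    entry1="\xff\xff\xff\xff\x04\x90\xaa\x33"
    entry2="\x00\x00\x00\x00\x00\x00\x00\x01"
    EOF
    # my_libfuzzer_target -dict=vfio_user.dict ...    # standard libFuzzer usage; pass-through
    #                                                 # by llvm_vfio_fuzz unverified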
00:07:32.666 [2024-07-15 16:24:12.109671] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2036505 ] 00:07:32.666 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.666 [2024-07-15 16:24:12.183346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.666 [2024-07-15 16:24:12.252308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.925 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.925 INFO: Seed: 403813393 00:07:32.925 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:32.925 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:32.925 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:32.925 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.925 #2 INITED exec/s: 0 rss: 64Mb 00:07:32.925 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:32.925 This may also happen if the target rejected all inputs we tried so far 00:07:32.925 [2024-07-15 16:24:12.487983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:33.184 [2024-07-15 16:24:12.543486] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:33.184 [2024-07-15 16:24:12.543522] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:33.444 NEW_FUNC[1/660]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:33.444 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:33.444 #47 NEW cov: 10956 ft: 10736 corp: 2/14b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 5 CrossOver-InsertByte-InsertRepeatedBytes-CrossOver-CopyPart- 00:07:33.444 [2024-07-15 16:24:13.023412] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:33.444 [2024-07-15 16:24:13.023463] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:33.702 #86 NEW cov: 10975 ft: 13271 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 4 CrossOver-CrossOver-InsertRepeatedBytes-InsertByte- 00:07:33.702 [2024-07-15 16:24:13.227652] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:33.702 [2024-07-15 16:24:13.227683] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:33.961 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:33.961 #97 NEW cov: 10992 ft: 15577 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:07:33.961 [2024-07-15 16:24:13.421197] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:33.961 [2024-07-15 16:24:13.421226] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:33.961 #98 NEW cov: 10992 ft: 16400 corp: 5/53b lim: 13 exec/s: 98 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:34.220 [2024-07-15 16:24:13.608688] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid 
argument 00:07:34.220 [2024-07-15 16:24:13.608718] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.220 #99 NEW cov: 10992 ft: 16975 corp: 6/66b lim: 13 exec/s: 99 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:07:34.220 [2024-07-15 16:24:13.791145] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.220 [2024-07-15 16:24:13.791175] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.479 #100 NEW cov: 10992 ft: 17390 corp: 7/79b lim: 13 exec/s: 100 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:07:34.479 [2024-07-15 16:24:13.975917] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.479 [2024-07-15 16:24:13.975946] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.738 #101 NEW cov: 10992 ft: 17798 corp: 8/92b lim: 13 exec/s: 101 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:07:34.738 [2024-07-15 16:24:14.171053] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.738 [2024-07-15 16:24:14.171083] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.738 #102 NEW cov: 10999 ft: 17994 corp: 9/105b lim: 13 exec/s: 102 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:07:34.997 [2024-07-15 16:24:14.361653] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.997 [2024-07-15 16:24:14.361682] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.997 #103 NEW cov: 10999 ft: 18157 corp: 10/118b lim: 13 exec/s: 51 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:07:34.997 #103 DONE cov: 10999 ft: 18157 corp: 10/118b lim: 13 exec/s: 51 rss: 74Mb 00:07:34.997 Done 103 runs in 2 second(s) 00:07:34.997 [2024-07-15 16:24:14.496656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:35.257 16:24:14 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:35.257 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:35.257 16:24:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:35.257 [2024-07-15 16:24:14.789173] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:35.257 [2024-07-15 16:24:14.789256] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2036883 ] 00:07:35.257 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.516 [2024-07-15 16:24:14.863220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.516 [2024-07-15 16:24:14.934752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.776 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.776 INFO: Seed: 3084819621 00:07:35.776 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:07:35.776 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:07:35.776 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:35.776 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.776 #2 INITED exec/s: 0 rss: 64Mb 00:07:35.776 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
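Between targets, the ../common.sh trace lines above ((( i++ )), (( i < fuzz_num )), start_llvm_fuzz 6 1 0x1) indicate a simple driver loop that walks each vfio fuzz target with the same time budget and core mask. A hedged reconstruction is below; it assumes vfio/run.sh has already been sourced so start_llvm_fuzz is defined, and the variable names and the value of fuzz_num are guesses (only targets 5 and 6 are visible in this part of the log), not the literal common.sh code.

# Illustrative driver loop implied by the (( i++ )) / (( i < fuzz_num )) trace.
fuzz_num=7   # guess: one entry per vfio fuzz target
timen=1
core=0x1
for ((i = 0; i < fuzz_num; i++)); do
    start_llvm_fuzz "$i" "$timen" "$core"   # function provided by vfio/run.sh
done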
00:07:35.776 This may also happen if the target rejected all inputs we tried so far 00:07:35.776 [2024-07-15 16:24:15.173059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:35.776 [2024-07-15 16:24:15.224484] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:35.776 [2024-07-15 16:24:15.224515] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:36.036 NEW_FUNC[1/659]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:36.036 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:36.036 #11 NEW cov: 10948 ft: 10755 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 4 InsertByte-InsertRepeatedBytes-ShuffleBytes-CMP- DE: "\003\000\000\000"- 00:07:36.295 [2024-07-15 16:24:15.689023] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:36.295 [2024-07-15 16:24:15.689065] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:36.295 NEW_FUNC[1/1]: 0x1d5d360 in spdk_get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1267 00:07:36.295 #12 NEW cov: 10963 ft: 14847 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:07:36.295 [2024-07-15 16:24:15.886217] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:36.295 [2024-07-15 16:24:15.886249] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:36.555 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:36.555 #18 NEW cov: 10980 ft: 15388 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:07:36.555 [2024-07-15 16:24:16.070580] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:36.555 [2024-07-15 16:24:16.070614] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:36.813 #20 NEW cov: 10980 ft: 16158 corp: 5/37b lim: 9 exec/s: 20 rss: 74Mb L: 9/9 MS: 2 EraseBytes-CrossOver- 00:07:36.813 [2024-07-15 16:24:16.256578] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:36.813 [2024-07-15 16:24:16.256609] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:36.813 #21 NEW cov: 10980 ft: 16410 corp: 6/46b lim: 9 exec/s: 21 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\200"- 00:07:37.072 [2024-07-15 16:24:16.461994] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.072 [2024-07-15 16:24:16.462025] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.072 #23 NEW cov: 10980 ft: 16579 corp: 7/55b lim: 9 exec/s: 23 rss: 74Mb L: 9/9 MS: 2 EraseBytes-CopyPart- 00:07:37.072 [2024-07-15 16:24:16.646498] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.072 [2024-07-15 16:24:16.646528] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.331 #24 NEW cov: 10980 ft: 16697 corp: 8/64b lim: 9 exec/s: 24 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:37.331 [2024-07-15 16:24:16.834886] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.331 [2024-07-15 16:24:16.834915] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.590 #25 NEW cov: 10987 ft: 17408 corp: 9/73b lim: 9 exec/s: 25 rss: 74Mb L: 9/9 MS: 1 ChangeByte- 00:07:37.590 [2024-07-15 16:24:17.021075] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.590 [2024-07-15 16:24:17.021104] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.590 #26 NEW cov: 10987 ft: 17827 corp: 10/82b lim: 9 exec/s: 26 rss: 75Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:37.849 [2024-07-15 16:24:17.208321] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.849 [2024-07-15 16:24:17.208350] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.849 #27 NEW cov: 10987 ft: 18162 corp: 11/91b lim: 9 exec/s: 13 rss: 75Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:37.849 #27 DONE cov: 10987 ft: 18162 corp: 11/91b lim: 9 exec/s: 13 rss: 75Mb 00:07:37.849 ###### Recommended dictionary. ###### 00:07:37.849 "\003\000\000\000" # Uses: 0 00:07:37.849 "\001\000\000\000\000\000\000\200" # Uses: 0 00:07:37.849 "\377\377\377\377" # Uses: 0 00:07:37.849 ###### End of recommended dictionary. ###### 00:07:37.849 Done 27 runs in 2 second(s) 00:07:37.849 [2024-07-15 16:24:17.339640] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:38.108 16:24:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:38.108 16:24:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.108 16:24:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.108 16:24:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:38.108 00:07:38.108 real 0m19.544s 00:07:38.108 user 0m27.309s 00:07:38.108 sys 0m1.822s 00:07:38.108 16:24:17 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.108 16:24:17 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:38.108 ************************************ 00:07:38.108 END TEST vfio_llvm_fuzz 00:07:38.108 ************************************ 00:07:38.108 16:24:17 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:07:38.108 16:24:17 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:07:38.108 00:07:38.108 real 1m23.970s 00:07:38.108 user 2m8.118s 00:07:38.108 sys 0m8.985s 00:07:38.108 16:24:17 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.108 16:24:17 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:38.108 ************************************ 00:07:38.108 END TEST llvm_fuzz 00:07:38.108 ************************************ 00:07:38.108 16:24:17 -- common/autotest_common.sh@1142 -- # return 0 00:07:38.108 16:24:17 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:07:38.108 16:24:17 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:07:38.108 16:24:17 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:07:38.108 16:24:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.108 16:24:17 -- common/autotest_common.sh@10 -- # set +x 00:07:38.108 16:24:17 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:07:38.108 16:24:17 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:07:38.108 16:24:17 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:07:38.108 16:24:17 -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.678 INFO: APP EXITING 00:07:44.678 INFO: killing all VMs 00:07:44.678 INFO: killing vhost app 00:07:44.678 INFO: EXIT DONE 00:07:47.214 Waiting for block devices as requested 00:07:47.214 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:47.214 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:47.214 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:47.214 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:47.214 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:47.214 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:47.214 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:47.214 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:47.474 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:47.474 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:47.474 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:47.733 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:47.733 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:47.733 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:47.992 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:47.992 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:47.992 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:07:52.188 Cleaning 00:07:52.188 Removing: /dev/shm/spdk_tgt_trace.pid2001347 00:07:52.188 Removing: /var/run/dpdk/spdk_pid1998317 00:07:52.188 Removing: /var/run/dpdk/spdk_pid1999654 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2001347 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2002051 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2002887 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2003157 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2004272 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2004301 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2004701 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2005010 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2005335 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2005663 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2005994 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2006279 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2006537 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2006777 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2007701 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2010683 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2011189 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2011488 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2011638 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2012075 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2012337 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2012899 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2012933 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2013387 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2013484 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2013775 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2013830 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2014416 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2014699 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2014898 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2015058 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2015362 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2015397 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2015697 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2015985 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2016214 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2016441 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2016661 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2016888 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2017159 00:07:52.188 Removing: 
/var/run/dpdk/spdk_pid2017451 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2017730 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2018013 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2018300 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2018584 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2018872 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2019151 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2019426 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2019649 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2019885 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2020115 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2020354 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2020623 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2020912 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2021200 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2021565 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2022048 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2022576 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2023106 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2023402 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2023934 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2024381 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2024759 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2025289 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2025702 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2026110 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2026647 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2026989 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2027471 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2028000 00:07:52.188 Removing: /var/run/dpdk/spdk_pid2028300 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2028826 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2029334 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2029647 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2030195 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2030686 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2031019 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2031546 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2032000 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2032373 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2032908 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2033506 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2034014 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2034478 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2035436 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2035969 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2036505 00:07:52.189 Removing: /var/run/dpdk/spdk_pid2036883 00:07:52.189 Clean 00:07:52.189 16:24:31 -- common/autotest_common.sh@1451 -- # return 0 00:07:52.189 16:24:31 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:07:52.189 16:24:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.189 16:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:52.189 16:24:31 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:07:52.189 16:24:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.189 16:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:52.189 16:24:31 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:07:52.189 16:24:31 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:07:52.189 16:24:31 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:07:52.189 16:24:31 -- spdk/autotest.sh@391 -- # hash lcov 00:07:52.189 16:24:31 -- spdk/autotest.sh@391 -- # [[ 
CC_TYPE=clang == *\c\l\a\n\g* ]] 00:07:52.189 16:24:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:52.189 16:24:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:52.189 16:24:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.189 16:24:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.189 16:24:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.189 16:24:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.189 16:24:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.189 16:24:31 -- paths/export.sh@5 -- $ export PATH 00:07:52.189 16:24:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.189 16:24:31 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:07:52.189 16:24:31 -- common/autobuild_common.sh@444 -- $ date +%s 00:07:52.189 16:24:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721053471.XXXXXX 00:07:52.189 16:24:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721053471.vl08Tt 00:07:52.189 16:24:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:07:52.189 16:24:31 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:07:52.189 16:24:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:07:52.189 16:24:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:07:52.189 16:24:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:07:52.189 16:24:31 -- common/autobuild_common.sh@460 -- $ get_config_params 00:07:52.189 16:24:31 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:07:52.189 
16:24:31 -- common/autotest_common.sh@10 -- $ set +x 00:07:52.189 16:24:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:07:52.189 16:24:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:07:52.189 16:24:31 -- pm/common@17 -- $ local monitor 00:07:52.189 16:24:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:52.189 16:24:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:52.189 16:24:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:52.189 16:24:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:52.189 16:24:31 -- pm/common@25 -- $ sleep 1 00:07:52.189 16:24:31 -- pm/common@21 -- $ date +%s 00:07:52.189 16:24:31 -- pm/common@21 -- $ date +%s 00:07:52.189 16:24:31 -- pm/common@21 -- $ date +%s 00:07:52.189 16:24:31 -- pm/common@21 -- $ date +%s 00:07:52.189 16:24:31 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053471 00:07:52.189 16:24:31 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053471 00:07:52.189 16:24:31 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053471 00:07:52.189 16:24:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053471 00:07:52.189 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053471_collect-vmstat.pm.log 00:07:52.189 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053471_collect-cpu-load.pm.log 00:07:52.189 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053471_collect-cpu-temp.pm.log 00:07:52.189 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053471_collect-bmc-pm.bmc.pm.log 00:07:53.123 16:24:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:07:53.124 16:24:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:07:53.124 16:24:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:53.124 16:24:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:07:53.124 16:24:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:07:53.124 16:24:32 -- spdk/autopackage.sh@19 -- $ timing_finish 00:07:53.124 16:24:32 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:07:53.124 16:24:32 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:07:53.124 16:24:32 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 
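The autopackage prologue above starts the pm resource collectors (CPU load, vmstat, CPU temperature, plus the BMC power collector run via sudo) with a shared monitor.autopackage.sh.<epoch> prefix, and the stop_monitor_resources step that follows tears them down through the collect-*.pid files in the power output directory. A condensed sketch of that start/stop pattern is below; the script paths and flags match the commands logged here, while the backgrounding and the pid-file loop are assumptions rather than a copy of pm/common.

# Start the collectors, do the work, then stop them via their pid files.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
out="$SPDK_DIR/../output/power"
prefix="monitor.autopackage.sh.$(date +%s)"
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    "$SPDK_DIR/scripts/perf/pm/$mon" -d "$out" -l -p "$prefix" &
done
# ... packaging / timing work runs here ...
for pidfile in "$out"/collect-*.pid; do
    [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
done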
00:07:53.124 16:24:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:07:53.124 16:24:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:07:53.124 16:24:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:53.124 16:24:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:53.124 16:24:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:53.124 16:24:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:53.124 16:24:32 -- pm/common@44 -- $ pid=2043814 00:07:53.124 16:24:32 -- pm/common@50 -- $ kill -TERM 2043814 00:07:53.124 16:24:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:53.124 16:24:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:53.124 16:24:32 -- pm/common@44 -- $ pid=2043815 00:07:53.124 16:24:32 -- pm/common@50 -- $ kill -TERM 2043815 00:07:53.124 16:24:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:53.124 16:24:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:53.124 16:24:32 -- pm/common@44 -- $ pid=2043818 00:07:53.124 16:24:32 -- pm/common@50 -- $ kill -TERM 2043818 00:07:53.124 16:24:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:53.124 16:24:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:53.124 16:24:32 -- pm/common@44 -- $ pid=2043853 00:07:53.124 16:24:32 -- pm/common@50 -- $ sudo -E kill -TERM 2043853 00:07:53.124 + [[ -n 1891400 ]] 00:07:53.124 + sudo kill 1891400 00:07:53.132 [Pipeline] } 00:07:53.150 [Pipeline] // stage 00:07:53.154 [Pipeline] } 00:07:53.169 [Pipeline] // timeout 00:07:53.173 [Pipeline] } 00:07:53.187 [Pipeline] // catchError 00:07:53.191 [Pipeline] } 00:07:53.205 [Pipeline] // wrap 00:07:53.209 [Pipeline] } 00:07:53.221 [Pipeline] // catchError 00:07:53.226 [Pipeline] stage 00:07:53.228 [Pipeline] { (Epilogue) 00:07:53.238 [Pipeline] catchError 00:07:53.239 [Pipeline] { 00:07:53.252 [Pipeline] echo 00:07:53.254 Cleanup processes 00:07:53.258 [Pipeline] sh 00:07:53.538 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:53.538 1954158 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053132 00:07:53.538 1954210 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053132 00:07:53.538 2043962 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:07:53.538 2044816 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:53.551 [Pipeline] sh 00:07:53.914 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:53.914 ++ grep -v 'sudo pgrep' 00:07:53.914 ++ awk '{print $1}' 00:07:53.914 + sudo kill -9 1954158 1954210 2043962 00:07:53.923 [Pipeline] sh 00:07:54.201 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:07:54.201 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:07:54.201 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:07:55.589 
[Pipeline] sh 00:07:55.873 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:07:55.873 Artifacts sizes are good 00:07:55.886 [Pipeline] archiveArtifacts 00:07:55.892 Archiving artifacts 00:07:55.959 [Pipeline] sh 00:07:56.243 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:07:56.263 [Pipeline] cleanWs 00:07:56.276 [WS-CLEANUP] Deleting project workspace... 00:07:56.276 [WS-CLEANUP] Deferred wipeout is used... 00:07:56.284 [WS-CLEANUP] done 00:07:56.287 [Pipeline] } 00:07:56.309 [Pipeline] // catchError 00:07:56.322 [Pipeline] sh 00:07:56.603 + logger -p user.info -t JENKINS-CI 00:07:56.613 [Pipeline] } 00:07:56.633 [Pipeline] // stage 00:07:56.640 [Pipeline] } 00:07:56.658 [Pipeline] // node 00:07:56.664 [Pipeline] End of Pipeline 00:07:56.697 Finished: SUCCESS