00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2362
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3623
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.027 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.030 The recommended git tool is: git
00:00:00.031 using credential 00000000-0000-0000-0000-000000000002
00:00:00.036 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.051 Fetching changes from the remote Git repository
00:00:00.053 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.067 Using shallow fetch with depth 1
00:00:00.067 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.067 > git --version # timeout=10
00:00:00.080 > git --version # 'git version 2.39.2'
00:00:00.080 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.094 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.094 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.245 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.256 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.266 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:03.266 > git config core.sparsecheckout # timeout=10
00:00:03.274 > git read-tree -mu HEAD # timeout=10
00:00:03.291 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:03.309 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:03.309 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:03.389 [Pipeline] Start of Pipeline
00:00:03.406 [Pipeline] library
00:00:03.408 Loading library shm_lib@master
00:00:03.408 Library shm_lib@master is cached. Copying from home.
00:00:03.432 [Pipeline] node
00:00:03.451 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:03.453 [Pipeline] {
00:00:03.460 [Pipeline] catchError
00:00:03.461 [Pipeline] {
00:00:03.470 [Pipeline] wrap
00:00:03.476 [Pipeline] {
00:00:03.480 [Pipeline] stage
00:00:03.481 [Pipeline] { (Prologue)
00:00:03.667 [Pipeline] sh
00:00:03.956 + logger -p user.info -t JENKINS-CI
00:00:03.976 [Pipeline] echo
00:00:03.977 Node: WFP20
00:00:03.985 [Pipeline] sh
00:00:04.284 [Pipeline] setCustomBuildProperty
00:00:04.292 [Pipeline] echo
00:00:04.293 Cleanup processes
00:00:04.297 [Pipeline] sh
00:00:04.579 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.579 247357 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.590 [Pipeline] sh
00:00:04.870 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.870 ++ grep -v 'sudo pgrep'
00:00:04.870 ++ awk '{print $1}'
00:00:04.870 + sudo kill -9
00:00:04.870 + true
00:00:04.883 [Pipeline] cleanWs
00:00:04.892 [WS-CLEANUP] Deleting project workspace...
00:00:04.892 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.897 [WS-CLEANUP] done
00:00:04.901 [Pipeline] setCustomBuildProperty
00:00:04.913 [Pipeline] sh
00:00:05.192 + sudo git config --global --replace-all safe.directory '*'
00:00:05.260 [Pipeline] httpRequest
00:00:05.633 [Pipeline] echo
00:00:05.634 Sorcerer 10.211.164.101 is alive
00:00:05.641 [Pipeline] retry
00:00:05.642 [Pipeline] {
00:00:05.651 [Pipeline] httpRequest
00:00:05.654 HttpMethod: GET
00:00:05.655 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:05.655 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:05.668 Response Code: HTTP/1.1 200 OK
00:00:05.669 Success: Status code 200 is in the accepted range: 200,404
00:00:05.669 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.204 [Pipeline] }
00:00:08.220 [Pipeline] // retry
00:00:08.227 [Pipeline] sh
00:00:08.511 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.525 [Pipeline] httpRequest
00:00:08.911 [Pipeline] echo
00:00:08.912 Sorcerer 10.211.164.101 is alive
00:00:08.923 [Pipeline] retry
00:00:08.925 [Pipeline] {
00:00:08.940 [Pipeline] httpRequest
00:00:08.944 HttpMethod: GET
00:00:08.944 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:08.945 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:08.966 Response Code: HTTP/1.1 200 OK
00:00:08.966 Success: Status code 200 is in the accepted range: 200,404
00:00:08.966 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:07.630 [Pipeline] }
00:01:07.648 [Pipeline] // retry
00:01:07.655 [Pipeline] sh
00:01:07.941 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:10.490 [Pipeline] sh
00:01:10.773 + git -C spdk log --oneline -n5
00:01:10.773 c13c99a5e test: Various fixes for Fedora40
00:01:10.773 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:10.773 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:10.773 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:10.773 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:10.784 [Pipeline] }
00:01:10.797 [Pipeline] // stage
00:01:10.806 [Pipeline] stage
00:01:10.808 [Pipeline] { (Prepare)
00:01:10.824 [Pipeline] writeFile
00:01:10.838 [Pipeline] sh
00:01:11.122 + logger -p user.info -t JENKINS-CI
00:01:11.135 [Pipeline] sh
00:01:11.420 + logger -p user.info -t JENKINS-CI
00:01:11.431 [Pipeline] sh
00:01:11.770 + cat autorun-spdk.conf
00:01:11.770 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.770 SPDK_TEST_FUZZER_SHORT=1
00:01:11.770 SPDK_TEST_FUZZER=1
00:01:11.770 SPDK_RUN_UBSAN=1
00:01:11.778 RUN_NIGHTLY=1
00:01:11.782 [Pipeline] readFile
00:01:11.805 [Pipeline] withEnv
00:01:11.808 [Pipeline] {
00:01:11.819 [Pipeline] sh
00:01:12.104 + set -ex
00:01:12.104 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:01:12.104 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:12.104 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.104 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:12.104 ++ SPDK_TEST_FUZZER=1
00:01:12.104 ++ SPDK_RUN_UBSAN=1
00:01:12.104 ++ RUN_NIGHTLY=1
00:01:12.104 + case $SPDK_TEST_NVMF_NICS in
00:01:12.104 + DRIVERS=
00:01:12.104 + [[ -n '' ]]
00:01:12.104 + exit 0
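The step above is the standard autorun pattern: the job writes autorun-spdk.conf into the workspace, then a shell running under set -ex sources it, so every flag is echoed into the log and any failure aborts the stage. A minimal standalone sketch of that pattern (paths and variable names taken from the log; the SPDK_TEST_FUZZER default is an assumption added for illustration):

  #!/usr/bin/env bash
  # Echo each command (-x) and stop on the first error (-e), as the CI step does.
  set -ex
  conf=/var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
  # Bail out early if the job never wrote a config for this run.
  [[ -f $conf ]]
  source "$conf"
  # The flags are now plain shell variables for later stages to branch on.
  if [[ ${SPDK_TEST_FUZZER:-0} -eq 1 ]]; then
      echo "fuzzer tests requested"
  fi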
00:01:12.114 [Pipeline] }
00:01:12.128 [Pipeline] // withEnv
00:01:12.133 [Pipeline] }
00:01:12.148 [Pipeline] // stage
00:01:12.157 [Pipeline] catchError
00:01:12.159 [Pipeline] {
00:01:12.172 [Pipeline] timeout
00:01:12.173 Timeout set to expire in 30 min
00:01:12.174 [Pipeline] {
00:01:12.188 [Pipeline] stage
00:01:12.190 [Pipeline] { (Tests)
00:01:12.204 [Pipeline] sh
00:01:12.588 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:12.588 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:12.588 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:01:12.588 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:01:12.588 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:12.588 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:12.588 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:01:12.588 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:12.588 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:12.588 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:12.588 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:01:12.588 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:12.588 + source /etc/os-release
00:01:12.588 ++ NAME='Fedora Linux'
00:01:12.588 ++ VERSION='39 (Cloud Edition)'
00:01:12.588 ++ ID=fedora
00:01:12.588 ++ VERSION_ID=39
00:01:12.588 ++ VERSION_CODENAME=
00:01:12.588 ++ PLATFORM_ID=platform:f39
00:01:12.588 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:12.588 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:12.588 ++ LOGO=fedora-logo-icon
00:01:12.588 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:12.588 ++ HOME_URL=https://fedoraproject.org/
00:01:12.588 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:12.588 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:12.588 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:12.588 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:12.588 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:12.588 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:12.588 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:12.588 ++ SUPPORT_END=2024-11-12
00:01:12.588 ++ VARIANT='Cloud Edition'
00:01:12.588 ++ VARIANT_ID=cloud
00:01:12.588 + uname -a
00:01:12.588 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:12.588 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:01:15.883 Hugepages
00:01:15.883 node hugesize free / total
00:01:15.883 node0 1048576kB 0 / 0
00:01:15.883 node0 2048kB 0 / 0
00:01:15.883 node1 1048576kB 0 / 0
00:01:15.883 node1 2048kB 0 / 0
00:01:15.883
00:01:15.883 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:15.883 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:15.883 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:15.883 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:15.883 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:15.883 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:15.883 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:15.883 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:15.883 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:15.883 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:15.883 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:15.883 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:15.883 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:15.883 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:15.883 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:15.883 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:15.883 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:15.883 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:15.883 + rm -f /tmp/spdk-ld-path
00:01:15.883 + source autorun-spdk.conf
00:01:15.883 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.883 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:15.883 ++ SPDK_TEST_FUZZER=1
00:01:15.883 ++ SPDK_RUN_UBSAN=1
00:01:15.883 ++ RUN_NIGHTLY=1
00:01:15.883 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:15.883 + [[ -n '' ]]
00:01:15.883 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:15.883 + for M in /var/spdk/build-*-manifest.txt
00:01:15.883 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:15.883 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:15.883 + for M in /var/spdk/build-*-manifest.txt
00:01:15.883 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:15.883 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:15.883 + for M in /var/spdk/build-*-manifest.txt
00:01:15.883 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:15.883 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:15.883 ++ uname
00:01:15.883 + [[ Linux == \L\i\n\u\x ]]
00:01:15.883 + sudo dmesg -T
00:01:15.883 + sudo dmesg --clear
00:01:15.883 + dmesg_pid=248248
00:01:15.883 + [[ Fedora Linux == FreeBSD ]]
00:01:15.883 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.883 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.883 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:15.883 + [[ -x /usr/src/fio-static/fio ]]
00:01:15.883 + export FIO_BIN=/usr/src/fio-static/fio
00:01:15.883 + FIO_BIN=/usr/src/fio-static/fio
00:01:15.883 + sudo dmesg -Tw
00:01:15.883 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:15.883 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:15.883 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:15.883 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.883 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.883 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:15.883 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.883 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.883 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:15.883 Test configuration:
00:01:15.883 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.883 SPDK_TEST_FUZZER_SHORT=1
00:01:15.883 SPDK_TEST_FUZZER=1
00:01:15.883 SPDK_RUN_UBSAN=1
00:01:15.883 RUN_NIGHTLY=1
17:10:35 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
17:10:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
17:10:35 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
17:10:35 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:10:35 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
17:10:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:10:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:10:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:10:35 -- paths/export.sh@5 -- $ export PATH
17:10:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:10:35 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
17:10:35 -- common/autobuild_common.sh@440 -- $ date +%s
17:10:35 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731168635.XXXXXX
17:10:35 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731168635.6gJ96u
17:10:35 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
17:10:35 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
17:10:35 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
17:10:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
17:10:35 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
17:10:35 -- common/autobuild_common.sh@456 -- $ get_config_params
17:10:35 -- common/autotest_common.sh@397 -- $ xtrace_disable
17:10:35 -- common/autotest_common.sh@10 -- $ set +x
17:10:35 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
17:10:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
17:10:35 -- spdk/autobuild.sh@12 -- $ umask 022
17:10:35 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
17:10:35 -- spdk/autobuild.sh@16 -- $ date -u
00:01:15.884 Sat Nov 9 04:10:35 PM UTC 2024
17:10:35 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:15.884 LTS-67-gc13c99a5e
17:10:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
17:10:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
17:10:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
17:10:35 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
17:10:35 -- common/autotest_common.sh@1093 -- $ xtrace_disable
17:10:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.884 ************************************
00:01:15.884 START TEST ubsan
00:01:15.884 ************************************
17:10:35 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:15.884 using ubsan
00:01:15.884
00:01:15.884 real 0m0.000s
00:01:15.884 user 0m0.000s
00:01:15.884 sys 0m0.000s
17:10:35 -- common/autotest_common.sh@1115 -- $ xtrace_disable
17:10:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.884 ************************************
00:01:15.884 END TEST ubsan
00:01:15.884 ************************************
17:10:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
17:10:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
17:10:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
17:10:35 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
17:10:35 -- spdk/autobuild.sh@52 -- $ llvm_precompile
17:10:35 -- common/autobuild_common.sh@428 -- $ run_test autobuild_llvm_precompile _llvm_precompile
17:10:35 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
17:10:35 -- common/autotest_common.sh@1093 -- $ xtrace_disable
17:10:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.884 ************************************
00:01:15.884 START TEST autobuild_llvm_precompile
00:01:15.884 ************************************
17:10:35 -- common/autotest_common.sh@1114 -- $ _llvm_precompile
17:10:35 -- common/autobuild_common.sh@32 -- $ clang --version
00:01:15.884 17:10:35 -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39)
00:01:15.884 Target: x86_64-redhat-linux-gnu
00:01:15.884 Thread model: posix
00:01:15.884 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
17:10:35 -- common/autobuild_common.sh@33 -- $ clang_num=17
17:10:35 -- common/autobuild_common.sh@35 -- $ export CC=clang-17
17:10:35 -- common/autobuild_common.sh@35 -- $ CC=clang-17
17:10:35 -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17
17:10:35 -- common/autobuild_common.sh@36 -- $ CXX=clang++-17
17:10:35 -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
17:10:35 -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
17:10:35 -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]]
17:10:35 -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a'
17:10:35 -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:16.143 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:16.143 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:16.712 Using 'verbs' RDMA provider
00:01:32.175 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:44.396 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:44.965 Creating mk/config.mk...done.
00:01:44.965 Creating mk/cc.flags.mk...done.
00:01:44.965 Type 'make' to build.
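The precompile step above locates clang's libFuzzer archive with a bash extended glob that matches either the major version directory (17) or the full version directory (17.0.6), exactly as traced in the log. A standalone sketch of that lookup (the version parsing is simplified here and the error message is illustrative):

  #!/usr/bin/env bash
  # extglob enables the @(a|b) and ?(x) patterns used below;
  # nullglob makes a non-matching glob expand to nothing instead of itself.
  shopt -s extglob nullglob
  clang_version=$(clang --version | sed -n 's/.*clang version \([0-9.]*\).*/\1/p')
  clang_num=${clang_version%%.*}    # e.g. 17 from 17.0.6
  fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
  (( ${#fuzzer_libs[@]} )) || { echo "libclang_rt.fuzzer_no_main.a not found" >&2; exit 1; }
  echo "fuzzer_lib=${fuzzer_libs[0]}"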
00:01:44.965
00:01:44.965 real 0m28.874s
00:01:44.965 user 0m12.726s
00:01:44.965 sys 0m15.568s
17:11:04 -- common/autotest_common.sh@1115 -- $ xtrace_disable
17:11:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.965 ************************************
00:01:44.965 END TEST autobuild_llvm_precompile
00:01:44.965 ************************************
17:11:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
17:11:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
17:11:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
17:11:04 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
17:11:04 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:45.225 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:45.225 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:45.484 Using 'verbs' RDMA provider
00:01:58.267 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:02:10.518 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:10.518 Creating mk/config.mk...done.
00:02:10.518 Creating mk/cc.flags.mk...done.
00:02:10.518 Type 'make' to build.
00:02:10.518 17:11:29 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
17:11:29 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
17:11:29 -- common/autotest_common.sh@1093 -- $ xtrace_disable
17:11:29 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.518 ************************************
00:02:10.518 START TEST make
00:02:10.518 ************************************
17:11:29 -- common/autotest_common.sh@1114 -- $ make -j112
00:02:10.518 make[1]: Nothing to be done for 'all'.
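The START TEST and END TEST banners above come from run_test, which wraps a named command, runs it, and reports its result; the real helper lives in SPDK's autotest_common.sh. A simplified sketch of the same idea (the banner and timing format here are illustrative, not the exact upstream implementation):

  #!/usr/bin/env bash
  # Banner-and-timing wrapper in the spirit of run_test (simplified).
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      local start=$SECONDS rc=0
      "$@" || rc=$?            # run the wrapped command, capturing its status
      echo "************************************"
      echo "END TEST $name ($((SECONDS - start))s, rc=$rc)"
      echo "************************************"
      return $rc
  }

  run_test make make -j112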
00:02:11.462 The Meson build system
00:02:11.462 Version: 1.5.0
00:02:11.462 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:02:11.462 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:11.462 Build type: native build
00:02:11.462 Project name: libvfio-user
00:02:11.462 Project version: 0.0.1
00:02:11.462 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:02:11.462 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:02:11.462 Host machine cpu family: x86_64
00:02:11.462 Host machine cpu: x86_64
00:02:11.462 Run-time dependency threads found: YES
00:02:11.462 Library dl found: YES
00:02:11.462 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:11.462 Run-time dependency json-c found: YES 0.17
00:02:11.462 Run-time dependency cmocka found: YES 1.1.7
00:02:11.462 Program pytest-3 found: NO
00:02:11.462 Program flake8 found: NO
00:02:11.462 Program misspell-fixer found: NO
00:02:11.462 Program restructuredtext-lint found: NO
00:02:11.463 Program valgrind found: YES (/usr/bin/valgrind)
00:02:11.463 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:11.463 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:11.463 Compiler for C supports arguments -Wwrite-strings: YES
00:02:11.463 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:11.463 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:11.463 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:11.463 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:11.463 Build targets in project: 8
00:02:11.463 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:11.463 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:11.463
00:02:11.463 libvfio-user 0.0.1
00:02:11.463
00:02:11.463 User defined options
00:02:11.463 buildtype : debug
00:02:11.463 default_library: static
00:02:11.463 libdir : /usr/local/lib
00:02:11.463
00:02:11.463 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:11.721 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:11.721 [1/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
[2/36] Compiling C object samples/lspci.p/lspci.c.o
[3/36] Compiling C object samples/null.p/null.c.o
[4/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
[5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
[6/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
[7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
[8/36] Compiling C object samples/client.p/.._lib_tran.c.o
[9/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
[10/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
[11/36] Compiling C object samples/client.p/.._lib_migration.c.o
[12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
[13/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
[14/36] Compiling C object test/unit_tests.p/unit-tests.c.o
[15/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
[16/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
[17/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
[18/36] Compiling C object test/unit_tests.p/mocks.c.o
[19/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
[20/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
[21/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
[22/36] Compiling C object samples/server.p/server.c.o
[23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
[24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
[25/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
[26/36] Compiling C object samples/client.p/client.c.o
[27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
[28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
[29/36] Linking static target lib/libvfio-user.a
[30/36] Linking target samples/client
00:02:11.980 [31/36] Linking target samples/server
00:02:11.980 [32/36] Linking target test/unit_tests
00:02:11.980 [33/36] Linking target samples/shadow_ioeventfd_server
00:02:11.980 [34/36] Linking target samples/gpio-pci-idio-16
00:02:11.980 [35/36] Linking target samples/null
00:02:11.980 [36/36] Linking target samples/lspci
00:02:11.980 INFO: autodetecting backend as ninja
00:02:11.980 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:11.980 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:12.239 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:12.239 ninja: no work to do.
00:02:17.515 The Meson build system
00:02:17.515 Version: 1.5.0
00:02:17.515 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:02:17.515 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:02:17.515 Build type: native build
00:02:17.515 Program cat found: YES (/usr/bin/cat)
00:02:17.515 Project name: DPDK
00:02:17.515 Project version: 23.11.0
00:02:17.515 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:02:17.515 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:02:17.515 Host machine cpu family: x86_64
00:02:17.515 Host machine cpu: x86_64
00:02:17.515 Message: ## Building in Developer Mode ##
00:02:17.515 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:17.515 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:17.515 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:17.515 Program python3 found: YES (/usr/bin/python3)
00:02:17.515 Program cat found: YES (/usr/bin/cat)
00:02:17.515 Compiler for C supports arguments -march=native: YES
00:02:17.515 Checking for size of "void *" : 8
00:02:17.515 Checking for size of "void *" : 8 (cached)
00:02:17.515 Library m found: YES
00:02:17.515 Library numa found: YES
00:02:17.515 Has header "numaif.h" : YES
00:02:17.515 Library fdt found: NO
00:02:17.515 Library execinfo found: NO
00:02:17.515 Has header "execinfo.h" : YES
00:02:17.515 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:17.515 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:17.515 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:17.515 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:17.515 Run-time dependency openssl found: YES 3.1.1
00:02:17.515 Run-time dependency libpcap found: YES 1.10.4
00:02:17.515 Has header "pcap.h" with dependency libpcap: YES
00:02:17.515 Compiler for C supports arguments -Wcast-qual: YES
00:02:17.515 Compiler for C supports arguments -Wdeprecated: YES
00:02:17.515 Compiler for C supports arguments -Wformat: YES
00:02:17.515 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:17.515 Compiler for C supports arguments -Wformat-security: YES
00:02:17.515 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:17.515 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:17.515 Compiler for C supports arguments -Wnested-externs: YES
00:02:17.515 Compiler for C supports arguments -Wold-style-definition: YES
00:02:17.515 Compiler for C supports arguments -Wpointer-arith: YES
00:02:17.515 Compiler for C supports arguments -Wsign-compare: YES
00:02:17.516 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:17.516 Compiler for C supports arguments -Wundef: YES
00:02:17.516 Compiler for C supports arguments -Wwrite-strings: YES
00:02:17.516 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:17.516 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:02:17.516 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:17.516 Program objdump found: YES (/usr/bin/objdump)
00:02:17.516 Compiler for C supports arguments -mavx512f: YES
00:02:17.516 Checking if "AVX512 checking" compiles: YES
00:02:17.516 Fetching value of define "__SSE4_2__" : 1
00:02:17.516 Fetching value of define "__AES__" : 1
00:02:17.516 Fetching value of define "__AVX__" : 1
00:02:17.516 Fetching value of define "__AVX2__" : 1
00:02:17.516 Fetching value of define "__AVX512BW__" : 1
00:02:17.516 Fetching value of define "__AVX512CD__" : 1
00:02:17.516 Fetching value of define "__AVX512DQ__" : 1
00:02:17.516 Fetching value of define "__AVX512F__" : 1
00:02:17.516 Fetching value of define "__AVX512VL__" : 1
00:02:17.516 Fetching value of define "__PCLMUL__" : 1
00:02:17.516 Fetching value of define "__RDRND__" : 1
00:02:17.516 Fetching value of define "__RDSEED__" : 1
00:02:17.516 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:17.516 Fetching value of define "__znver1__" : (undefined)
00:02:17.516 Fetching value of define "__znver2__" : (undefined)
00:02:17.516 Fetching value of define "__znver3__" : (undefined)
00:02:17.516 Fetching value of define "__znver4__" : (undefined)
00:02:17.516 Compiler for C supports arguments -Wno-format-truncation: NO
00:02:17.516 Message: lib/log: Defining dependency "log"
00:02:17.516 Message: lib/kvargs: Defining dependency "kvargs"
00:02:17.516 Message: lib/telemetry: Defining dependency "telemetry"
00:02:17.516 Checking for function "getentropy" : NO
00:02:17.516 Message: lib/eal: Defining dependency "eal"
00:02:17.516 Message: lib/ring: Defining dependency "ring"
00:02:17.516 Message: lib/rcu: Defining dependency "rcu"
00:02:17.516 Message: lib/mempool: Defining dependency "mempool"
00:02:17.516 Message: lib/mbuf: Defining dependency "mbuf"
00:02:17.516 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:17.516 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:17.516 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:17.516 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:17.516 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:17.516 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:17.516 Compiler for C supports arguments -mpclmul: YES
00:02:17.516 Compiler for C supports arguments -maes: YES
00:02:17.516 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:17.516 Compiler for C supports arguments -mavx512bw: YES
00:02:17.516 Compiler for C supports arguments -mavx512dq: YES
00:02:17.516 Compiler for C supports arguments -mavx512vl: YES
00:02:17.516 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:17.516 Compiler for C supports arguments -mavx2: YES
00:02:17.516 Compiler for C supports arguments -mavx: YES
00:02:17.516 Message: lib/net: Defining dependency "net"
00:02:17.516 Message: lib/meter: Defining dependency "meter"
00:02:17.516 Message: lib/ethdev: Defining dependency "ethdev"
00:02:17.516 Message: lib/pci: Defining dependency "pci"
00:02:17.516 Message: lib/cmdline: Defining dependency "cmdline"
00:02:17.516 Message: lib/hash: Defining dependency "hash"
00:02:17.516 Message: lib/timer: Defining dependency "timer"
00:02:17.516 Message: lib/compressdev: Defining dependency "compressdev"
00:02:17.516 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:17.516 Message: lib/dmadev: Defining dependency "dmadev"
00:02:17.516 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:17.516 Message: lib/power: Defining dependency "power"
00:02:17.516 Message: lib/reorder: Defining dependency "reorder"
00:02:17.516 Message: lib/security: Defining dependency "security"
"security" 00:02:17.516 Has header "linux/userfaultfd.h" : YES 00:02:17.516 Has header "linux/vduse.h" : YES 00:02:17.516 Message: lib/vhost: Defining dependency "vhost" 00:02:17.516 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:17.516 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.516 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:17.516 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.516 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:17.516 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:17.516 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:17.516 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:17.516 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:17.516 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:17.516 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:17.516 Configuring doxy-api-html.conf using configuration 00:02:17.516 Configuring doxy-api-man.conf using configuration 00:02:17.516 Program mandb found: YES (/usr/bin/mandb) 00:02:17.516 Program sphinx-build found: NO 00:02:17.516 Configuring rte_build_config.h using configuration 00:02:17.516 Message: 00:02:17.516 ================= 00:02:17.516 Applications Enabled 00:02:17.516 ================= 00:02:17.516 00:02:17.516 apps: 00:02:17.516 00:02:17.516 00:02:17.516 Message: 00:02:17.516 ================= 00:02:17.516 Libraries Enabled 00:02:17.516 ================= 00:02:17.516 00:02:17.516 libs: 00:02:17.516 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:17.516 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:17.516 cryptodev, dmadev, power, reorder, security, vhost, 00:02:17.516 00:02:17.516 Message: 00:02:17.516 =============== 00:02:17.516 Drivers Enabled 00:02:17.516 =============== 00:02:17.516 00:02:17.516 common: 00:02:17.516 00:02:17.516 bus: 00:02:17.516 pci, vdev, 00:02:17.516 mempool: 00:02:17.516 ring, 00:02:17.516 dma: 00:02:17.516 00:02:17.516 net: 00:02:17.516 00:02:17.516 crypto: 00:02:17.516 00:02:17.516 compress: 00:02:17.516 00:02:17.516 vdpa: 00:02:17.516 00:02:17.516 00:02:17.516 Message: 00:02:17.516 ================= 00:02:17.516 Content Skipped 00:02:17.516 ================= 00:02:17.516 00:02:17.516 apps: 00:02:17.516 dumpcap: explicitly disabled via build config 00:02:17.516 graph: explicitly disabled via build config 00:02:17.516 pdump: explicitly disabled via build config 00:02:17.516 proc-info: explicitly disabled via build config 00:02:17.516 test-acl: explicitly disabled via build config 00:02:17.516 test-bbdev: explicitly disabled via build config 00:02:17.516 test-cmdline: explicitly disabled via build config 00:02:17.516 test-compress-perf: explicitly disabled via build config 00:02:17.516 test-crypto-perf: explicitly disabled via build config 00:02:17.516 test-dma-perf: explicitly disabled via build config 00:02:17.516 test-eventdev: explicitly disabled via build config 00:02:17.516 test-fib: explicitly disabled via build config 00:02:17.516 test-flow-perf: explicitly disabled via build config 00:02:17.516 test-gpudev: explicitly disabled via build config 00:02:17.516 test-mldev: explicitly disabled via build config 00:02:17.516 test-pipeline: explicitly disabled via build config 00:02:17.516 test-pmd: explicitly disabled via build config 00:02:17.516 test-regex: explicitly disabled 
00:02:17.516 test-sad: explicitly disabled via build config
00:02:17.516 test-security-perf: explicitly disabled via build config
00:02:17.516
00:02:17.516 libs:
00:02:17.516 metrics: explicitly disabled via build config
00:02:17.516 acl: explicitly disabled via build config
00:02:17.516 bbdev: explicitly disabled via build config
00:02:17.516 bitratestats: explicitly disabled via build config
00:02:17.516 bpf: explicitly disabled via build config
00:02:17.516 cfgfile: explicitly disabled via build config
00:02:17.516 distributor: explicitly disabled via build config
00:02:17.516 efd: explicitly disabled via build config
00:02:17.516 eventdev: explicitly disabled via build config
00:02:17.516 dispatcher: explicitly disabled via build config
00:02:17.516 gpudev: explicitly disabled via build config
00:02:17.516 gro: explicitly disabled via build config
00:02:17.516 gso: explicitly disabled via build config
00:02:17.516 ip_frag: explicitly disabled via build config
00:02:17.516 jobstats: explicitly disabled via build config
00:02:17.516 latencystats: explicitly disabled via build config
00:02:17.516 lpm: explicitly disabled via build config
00:02:17.516 member: explicitly disabled via build config
00:02:17.516 pcapng: explicitly disabled via build config
00:02:17.516 rawdev: explicitly disabled via build config
00:02:17.516 regexdev: explicitly disabled via build config
00:02:17.516 mldev: explicitly disabled via build config
00:02:17.516 rib: explicitly disabled via build config
00:02:17.516 sched: explicitly disabled via build config
00:02:17.516 stack: explicitly disabled via build config
00:02:17.516 ipsec: explicitly disabled via build config
00:02:17.516 pdcp: explicitly disabled via build config
00:02:17.516 fib: explicitly disabled via build config
00:02:17.516 port: explicitly disabled via build config
00:02:17.516 pdump: explicitly disabled via build config
00:02:17.516 table: explicitly disabled via build config
00:02:17.516 pipeline: explicitly disabled via build config
00:02:17.516 graph: explicitly disabled via build config
00:02:17.516 node: explicitly disabled via build config
00:02:17.516
00:02:17.516 drivers:
00:02:17.516 common/cpt: not in enabled drivers build config
00:02:17.516 common/dpaax: not in enabled drivers build config
00:02:17.516 common/iavf: not in enabled drivers build config
00:02:17.516 common/idpf: not in enabled drivers build config
00:02:17.516 common/mvep: not in enabled drivers build config
00:02:17.516 common/octeontx: not in enabled drivers build config
00:02:17.516 bus/auxiliary: not in enabled drivers build config
00:02:17.516 bus/cdx: not in enabled drivers build config
00:02:17.516 bus/dpaa: not in enabled drivers build config
00:02:17.516 bus/fslmc: not in enabled drivers build config
00:02:17.516 bus/ifpga: not in enabled drivers build config
00:02:17.516 bus/platform: not in enabled drivers build config
00:02:17.517 bus/vmbus: not in enabled drivers build config
00:02:17.517 common/cnxk: not in enabled drivers build config
00:02:17.517 common/mlx5: not in enabled drivers build config
00:02:17.517 common/nfp: not in enabled drivers build config
00:02:17.517 common/qat: not in enabled drivers build config
00:02:17.517 common/sfc_efx: not in enabled drivers build config
00:02:17.517 mempool/bucket: not in enabled drivers build config
00:02:17.517 mempool/cnxk: not in enabled drivers build config
00:02:17.517 mempool/dpaa: not in enabled drivers build config
00:02:17.517 mempool/dpaa2: not in enabled drivers build config
00:02:17.517 mempool/octeontx: not in enabled drivers build config
00:02:17.517 mempool/stack: not in enabled drivers build config
00:02:17.517 dma/cnxk: not in enabled drivers build config
00:02:17.517 dma/dpaa: not in enabled drivers build config
00:02:17.517 dma/dpaa2: not in enabled drivers build config
00:02:17.517 dma/hisilicon: not in enabled drivers build config
00:02:17.517 dma/idxd: not in enabled drivers build config
00:02:17.517 dma/ioat: not in enabled drivers build config
00:02:17.517 dma/skeleton: not in enabled drivers build config
00:02:17.517 net/af_packet: not in enabled drivers build config
00:02:17.517 net/af_xdp: not in enabled drivers build config
00:02:17.517 net/ark: not in enabled drivers build config
00:02:17.517 net/atlantic: not in enabled drivers build config
00:02:17.517 net/avp: not in enabled drivers build config
00:02:17.517 net/axgbe: not in enabled drivers build config
00:02:17.517 net/bnx2x: not in enabled drivers build config
00:02:17.517 net/bnxt: not in enabled drivers build config
00:02:17.517 net/bonding: not in enabled drivers build config
00:02:17.517 net/cnxk: not in enabled drivers build config
00:02:17.517 net/cpfl: not in enabled drivers build config
00:02:17.517 net/cxgbe: not in enabled drivers build config
00:02:17.517 net/dpaa: not in enabled drivers build config
00:02:17.517 net/dpaa2: not in enabled drivers build config
00:02:17.517 net/e1000: not in enabled drivers build config
00:02:17.517 net/ena: not in enabled drivers build config
00:02:17.517 net/enetc: not in enabled drivers build config
00:02:17.517 net/enetfec: not in enabled drivers build config
00:02:17.517 net/enic: not in enabled drivers build config
00:02:17.517 net/failsafe: not in enabled drivers build config
00:02:17.517 net/fm10k: not in enabled drivers build config
00:02:17.517 net/gve: not in enabled drivers build config
00:02:17.517 net/hinic: not in enabled drivers build config
00:02:17.517 net/hns3: not in enabled drivers build config
00:02:17.517 net/i40e: not in enabled drivers build config
00:02:17.517 net/iavf: not in enabled drivers build config
00:02:17.517 net/ice: not in enabled drivers build config
00:02:17.517 net/idpf: not in enabled drivers build config
00:02:17.517 net/igc: not in enabled drivers build config
00:02:17.517 net/ionic: not in enabled drivers build config
00:02:17.517 net/ipn3ke: not in enabled drivers build config
00:02:17.517 net/ixgbe: not in enabled drivers build config
00:02:17.517 net/mana: not in enabled drivers build config
00:02:17.517 net/memif: not in enabled drivers build config
00:02:17.517 net/mlx4: not in enabled drivers build config
00:02:17.517 net/mlx5: not in enabled drivers build config
00:02:17.517 net/mvneta: not in enabled drivers build config
00:02:17.517 net/mvpp2: not in enabled drivers build config
00:02:17.517 net/netvsc: not in enabled drivers build config
00:02:17.517 net/nfb: not in enabled drivers build config
00:02:17.517 net/nfp: not in enabled drivers build config
00:02:17.517 net/ngbe: not in enabled drivers build config
00:02:17.517 net/null: not in enabled drivers build config
00:02:17.517 net/octeontx: not in enabled drivers build config
00:02:17.517 net/octeon_ep: not in enabled drivers build config
00:02:17.517 net/pcap: not in enabled drivers build config
00:02:17.517 net/pfe: not in enabled drivers build config
00:02:17.517 net/qede: not in enabled drivers build config
00:02:17.517 net/ring: not in enabled drivers build config
00:02:17.517 net/sfc: not in enabled drivers build config
00:02:17.517 net/softnic: not in enabled drivers build config
00:02:17.517 net/tap: not in enabled drivers build config
00:02:17.517 net/thunderx: not in enabled drivers build config
00:02:17.517 net/txgbe: not in enabled drivers build config
00:02:17.517 net/vdev_netvsc: not in enabled drivers build config
00:02:17.517 net/vhost: not in enabled drivers build config
00:02:17.517 net/virtio: not in enabled drivers build config
00:02:17.517 net/vmxnet3: not in enabled drivers build config
00:02:17.517 raw/*: missing internal dependency, "rawdev"
00:02:17.517 crypto/armv8: not in enabled drivers build config
00:02:17.517 crypto/bcmfs: not in enabled drivers build config
00:02:17.517 crypto/caam_jr: not in enabled drivers build config
00:02:17.517 crypto/ccp: not in enabled drivers build config
00:02:17.517 crypto/cnxk: not in enabled drivers build config
00:02:17.517 crypto/dpaa_sec: not in enabled drivers build config
00:02:17.517 crypto/dpaa2_sec: not in enabled drivers build config
00:02:17.517 crypto/ipsec_mb: not in enabled drivers build config
00:02:17.517 crypto/mlx5: not in enabled drivers build config
00:02:17.517 crypto/mvsam: not in enabled drivers build config
00:02:17.517 crypto/nitrox: not in enabled drivers build config
00:02:17.517 crypto/null: not in enabled drivers build config
00:02:17.517 crypto/octeontx: not in enabled drivers build config
00:02:17.517 crypto/openssl: not in enabled drivers build config
00:02:17.517 crypto/scheduler: not in enabled drivers build config
00:02:17.517 crypto/uadk: not in enabled drivers build config
00:02:17.517 crypto/virtio: not in enabled drivers build config
00:02:17.517 compress/isal: not in enabled drivers build config
00:02:17.517 compress/mlx5: not in enabled drivers build config
00:02:17.517 compress/octeontx: not in enabled drivers build config
00:02:17.517 compress/zlib: not in enabled drivers build config
00:02:17.517 regex/*: missing internal dependency, "regexdev"
00:02:17.517 ml/*: missing internal dependency, "mldev"
00:02:17.517 vdpa/ifc: not in enabled drivers build config
00:02:17.517 vdpa/mlx5: not in enabled drivers build config
00:02:17.517 vdpa/nfp: not in enabled drivers build config
00:02:17.517 vdpa/sfc: not in enabled drivers build config
00:02:17.517 event/*: missing internal dependency, "eventdev"
00:02:17.517 baseband/*: missing internal dependency, "bbdev"
00:02:17.517 gpu/*: missing internal dependency, "gpudev"
00:02:17.517
00:02:17.517
00:02:17.517 Build targets in project: 85
00:02:17.517
00:02:17.517 DPDK 23.11.0
00:02:17.517
00:02:17.517 User defined options
00:02:17.517 buildtype : debug
00:02:17.517 default_library : static
00:02:17.517 libdir : lib
00:02:17.517 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:02:17.517 c_args : -fPIC -Werror
00:02:17.517 c_link_args :
00:02:17.517 cpu_instruction_set: native
00:02:17.517 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:17.517 disable_libs : bbdev,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:17.517 enable_docs : false
00:02:17.517 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:17.517 enable_kmods : false
00:02:17.517 tests : false
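The option summary above corresponds to a meson configure along these lines (option names follow DPDK's meson_options.txt; the long disable_apps and disable_libs lists are abbreviated with '...' here and would be passed in full as printed in the summary):

  meson setup build-tmp \
      --buildtype=debug \
      --default-library=static \
      --libdir=lib \
      --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
      -Dc_args='-fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='test-fib,test-sad,test,...' \
      -Ddisable_libs='bbdev,latencystats,member,...' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_kmods=false \
      -Dtests=false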
00:02:17.517
00:02:17.517 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:17.783 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:02:17.783 [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[9/265] Linking static target lib/librte_kvargs.a
[10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[12/265] Compiling C object lib/librte_log.a.p/log_log.c.o
[13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[16/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[17/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[18/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[19/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[22/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[23/265] Linking static target lib/librte_log.a
[24/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[27/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[32/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[36/265] Linking static target lib/librte_pci.a
[37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[38/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:18.041 [39/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:18.041 [40/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:18.041 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:18.302 [42/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.302 [43/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.302 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:18.302 [45/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:18.302 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:18.302 [47/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:18.302 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:18.302 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:18.302 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:18.302 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:18.302 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:18.302 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:18.302 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:18.302 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:18.302 [56/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:18.302 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:18.302 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:18.302 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:18.302 [60/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:18.302 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:18.302 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:18.302 [63/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:18.302 [64/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:18.302 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:18.302 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:18.302 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:18.302 [68/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:18.302 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:18.302 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:18.302 [71/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:18.302 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:18.302 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:18.302 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:18.302 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:18.302 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:18.302 [77/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:18.302 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.302 [79/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:18.302 [80/265] Linking static target lib/librte_telemetry.a 00:02:18.302 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.303 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.303 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.303 [84/265] Linking static target lib/librte_meter.a 00:02:18.303 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.303 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.561 [87/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:18.561 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.561 [89/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:18.561 [90/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:18.561 [91/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.561 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.561 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.561 [94/265] Linking static target lib/librte_ring.a 00:02:18.561 [95/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.561 [96/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:18.561 [97/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.561 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.561 [99/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.561 [100/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.561 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.561 [102/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.561 [103/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.561 [104/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.561 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.561 [106/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.561 [107/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:18.561 [108/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:18.561 [109/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.561 [110/265] Linking static target lib/librte_timer.a 00:02:18.561 [111/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.561 [112/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.561 [113/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.561 [114/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.561 [115/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.561 [116/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.561 [117/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.561 [118/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:18.561 [119/265] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.561 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.561 [121/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.561 [122/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.561 [123/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:18.561 [124/265] Linking static target lib/librte_cmdline.a 00:02:18.561 [125/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.561 [126/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.561 [127/265] Linking static target lib/librte_net.a 00:02:18.561 [128/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:18.561 [129/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.561 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.561 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:18.561 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.561 [133/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.561 [134/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:18.561 [135/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.561 [136/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.561 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.561 [138/265] Linking static target lib/librte_mempool.a 00:02:18.561 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.561 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.561 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.561 [142/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.561 [143/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.561 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.561 [145/265] Linking static target lib/librte_eal.a 00:02:18.561 [146/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.561 [147/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:18.561 [148/265] Linking static target lib/librte_dmadev.a 00:02:18.561 [149/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.561 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.561 [151/265] Linking target lib/librte_log.so.24.0 00:02:18.561 [152/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.561 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.561 [154/265] Linking static target lib/librte_rcu.a 00:02:18.561 [155/265] Linking static target lib/librte_reorder.a 00:02:18.561 [156/265] Linking static target lib/librte_compressdev.a 00:02:18.561 [157/265] Linking static target lib/librte_mbuf.a 00:02:18.561 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:18.561 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.561 [160/265] Linking static target lib/librte_power.a 00:02:18.562 [161/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 
00:02:18.821 [162/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:18.821 [163/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.821 [164/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.821 [165/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:18.821 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.821 [167/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.821 [168/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:18.821 [169/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.821 [170/265] Linking static target lib/librte_security.a 00:02:18.821 [171/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:18.821 [172/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.821 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:18.821 [174/265] Linking static target lib/librte_hash.a 00:02:18.821 [175/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.821 [176/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.821 [177/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:18.821 [178/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.821 [179/265] Linking target lib/librte_kvargs.so.24.0 00:02:18.821 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:18.821 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:18.821 [182/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.821 [183/265] Linking static target lib/librte_cryptodev.a 00:02:18.821 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:18.821 [185/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.821 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:18.821 [187/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.821 [188/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.821 [189/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:18.821 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:18.821 [191/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.821 [192/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.821 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:18.821 [194/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:18.821 [195/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.081 [196/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:19.081 [197/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.081 [198/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.081 [199/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.081 [200/265] Compiling C object 
drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.081 [201/265] Linking static target drivers/librte_bus_vdev.a 00:02:19.081 [202/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:19.081 [203/265] Linking target lib/librte_telemetry.so.24.0 00:02:19.081 [204/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:19.081 [205/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.081 [206/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.081 [207/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.081 [208/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:19.081 [209/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:19.081 [210/265] Linking static target drivers/librte_mempool_ring.a 00:02:19.081 [211/265] Linking static target drivers/librte_bus_pci.a 00:02:19.081 [212/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:19.081 [213/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:19.081 [214/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:19.340 [215/265] Linking static target lib/librte_ethdev.a 00:02:19.340 [216/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.340 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.340 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.340 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.340 [220/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.600 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.600 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.859 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.859 [224/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.859 [225/265] Linking static target lib/librte_vhost.a 00:02:19.859 [226/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.238 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.175 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.749 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.290 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.290 [231/265] Linking target lib/librte_eal.so.24.0 00:02:31.290 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:31.290 [233/265] Linking target lib/librte_timer.so.24.0 00:02:31.290 [234/265] Linking target lib/librte_meter.so.24.0 00:02:31.290 [235/265] Linking target lib/librte_dmadev.so.24.0 00:02:31.290 [236/265] Linking target lib/librte_ring.so.24.0 00:02:31.290 [237/265] Linking target lib/librte_pci.so.24.0 
00:02:31.290 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:31.290 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:31.290 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:31.290 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:31.290 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:31.290 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:31.550 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:31.550 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:31.550 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:31.550 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:31.550 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:31.550 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:31.550 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:31.809 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:31.809 [252/265] Linking target lib/librte_compressdev.so.24.0 00:02:31.809 [253/265] Linking target lib/librte_net.so.24.0 00:02:31.809 [254/265] Linking target lib/librte_reorder.so.24.0 00:02:31.809 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:32.068 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:32.068 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:32.068 [258/265] Linking target lib/librte_hash.so.24.0 00:02:32.068 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:32.068 [260/265] Linking target lib/librte_ethdev.so.24.0 00:02:32.068 [261/265] Linking target lib/librte_security.so.24.0 00:02:32.068 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:32.068 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:32.328 [264/265] Linking target lib/librte_power.so.24.0 00:02:32.328 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:32.328 INFO: autodetecting backend as ninja 00:02:32.328 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:33.268 CC lib/ut/ut.o 00:02:33.268 CC lib/log/log.o 00:02:33.268 CC lib/log/log_flags.o 00:02:33.268 CC lib/log/log_deprecated.o 00:02:33.268 CC lib/ut_mock/mock.o 00:02:33.268 LIB libspdk_ut.a 00:02:33.268 LIB libspdk_ut_mock.a 00:02:33.268 LIB libspdk_log.a 00:02:33.527 CXX lib/trace_parser/trace.o 00:02:33.787 CC lib/dma/dma.o 00:02:33.787 CC lib/ioat/ioat.o 00:02:33.787 CC lib/util/base64.o 00:02:33.787 CC lib/util/bit_array.o 00:02:33.787 CC lib/util/crc32.o 00:02:33.787 CC lib/util/cpuset.o 00:02:33.787 CC lib/util/crc16.o 00:02:33.787 CC lib/util/crc32c.o 00:02:33.787 CC lib/util/crc32_ieee.o 00:02:33.787 CC lib/util/crc64.o 00:02:33.787 CC lib/util/file.o 00:02:33.787 CC lib/util/dif.o 00:02:33.787 CC lib/util/fd.o 00:02:33.787 CC lib/util/hexlify.o 00:02:33.787 CC lib/util/pipe.o 00:02:33.787 CC lib/util/iov.o 00:02:33.787 CC lib/util/math.o 00:02:33.787 CC lib/util/strerror_tls.o 00:02:33.787 CC lib/util/string.o 00:02:33.787 CC lib/util/uuid.o 00:02:33.787 CC lib/util/fd_group.o 00:02:33.787 CC lib/util/xor.o 00:02:33.787 CC lib/util/zipf.o 
00:02:33.787 CC lib/vfio_user/host/vfio_user.o 00:02:33.787 CC lib/vfio_user/host/vfio_user_pci.o 00:02:33.787 LIB libspdk_dma.a 00:02:33.787 LIB libspdk_ioat.a 00:02:34.047 LIB libspdk_vfio_user.a 00:02:34.047 LIB libspdk_util.a 00:02:34.047 LIB libspdk_trace_parser.a 00:02:34.306 CC lib/conf/conf.o 00:02:34.306 CC lib/idxd/idxd.o 00:02:34.306 CC lib/idxd/idxd_kernel.o 00:02:34.306 CC lib/idxd/idxd_user.o 00:02:34.306 CC lib/rdma/common.o 00:02:34.306 CC lib/rdma/rdma_verbs.o 00:02:34.306 CC lib/json/json_parse.o 00:02:34.306 CC lib/json/json_util.o 00:02:34.306 CC lib/json/json_write.o 00:02:34.306 CC lib/vmd/vmd.o 00:02:34.306 CC lib/vmd/led.o 00:02:34.306 CC lib/env_dpdk/memory.o 00:02:34.306 CC lib/env_dpdk/env.o 00:02:34.306 CC lib/env_dpdk/pci.o 00:02:34.306 CC lib/env_dpdk/init.o 00:02:34.306 CC lib/env_dpdk/threads.o 00:02:34.306 CC lib/env_dpdk/pci_ioat.o 00:02:34.306 CC lib/env_dpdk/pci_virtio.o 00:02:34.306 CC lib/env_dpdk/pci_vmd.o 00:02:34.306 CC lib/env_dpdk/pci_idxd.o 00:02:34.306 CC lib/env_dpdk/pci_event.o 00:02:34.306 CC lib/env_dpdk/sigbus_handler.o 00:02:34.306 CC lib/env_dpdk/pci_dpdk.o 00:02:34.306 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:34.306 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:34.306 LIB libspdk_conf.a 00:02:34.566 LIB libspdk_json.a 00:02:34.566 LIB libspdk_rdma.a 00:02:34.566 LIB libspdk_idxd.a 00:02:34.566 LIB libspdk_vmd.a 00:02:34.825 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:34.825 CC lib/jsonrpc/jsonrpc_server.o 00:02:34.825 CC lib/jsonrpc/jsonrpc_client.o 00:02:34.825 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:34.825 LIB libspdk_jsonrpc.a 00:02:35.083 CC lib/rpc/rpc.o 00:02:35.343 LIB libspdk_env_dpdk.a 00:02:35.343 LIB libspdk_rpc.a 00:02:35.602 CC lib/notify/notify.o 00:02:35.602 CC lib/notify/notify_rpc.o 00:02:35.602 CC lib/sock/sock.o 00:02:35.602 CC lib/sock/sock_rpc.o 00:02:35.602 CC lib/trace/trace.o 00:02:35.602 CC lib/trace/trace_flags.o 00:02:35.602 CC lib/trace/trace_rpc.o 00:02:35.602 LIB libspdk_notify.a 00:02:35.861 LIB libspdk_trace.a 00:02:35.861 LIB libspdk_sock.a 00:02:36.121 CC lib/thread/thread.o 00:02:36.121 CC lib/thread/iobuf.o 00:02:36.121 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:36.121 CC lib/nvme/nvme_ctrlr.o 00:02:36.121 CC lib/nvme/nvme_fabric.o 00:02:36.121 CC lib/nvme/nvme_ns_cmd.o 00:02:36.121 CC lib/nvme/nvme_pcie.o 00:02:36.121 CC lib/nvme/nvme_ns.o 00:02:36.121 CC lib/nvme/nvme_pcie_common.o 00:02:36.121 CC lib/nvme/nvme_qpair.o 00:02:36.121 CC lib/nvme/nvme.o 00:02:36.121 CC lib/nvme/nvme_quirks.o 00:02:36.121 CC lib/nvme/nvme_transport.o 00:02:36.121 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:36.121 CC lib/nvme/nvme_discovery.o 00:02:36.121 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:36.121 CC lib/nvme/nvme_tcp.o 00:02:36.121 CC lib/nvme/nvme_opal.o 00:02:36.121 CC lib/nvme/nvme_io_msg.o 00:02:36.121 CC lib/nvme/nvme_poll_group.o 00:02:36.121 CC lib/nvme/nvme_zns.o 00:02:36.121 CC lib/nvme/nvme_cuse.o 00:02:36.121 CC lib/nvme/nvme_vfio_user.o 00:02:36.121 CC lib/nvme/nvme_rdma.o 00:02:36.690 LIB libspdk_thread.a 00:02:36.950 CC lib/virtio/virtio_vhost_user.o 00:02:36.950 CC lib/virtio/virtio.o 00:02:36.950 CC lib/accel/accel_rpc.o 00:02:36.950 CC lib/accel/accel.o 00:02:36.950 CC lib/virtio/virtio_vfio_user.o 00:02:36.950 CC lib/virtio/virtio_pci.o 00:02:36.950 CC lib/accel/accel_sw.o 00:02:36.950 CC lib/init/subsystem.o 00:02:36.950 CC lib/init/json_config.o 00:02:36.950 CC lib/init/subsystem_rpc.o 00:02:36.950 CC lib/init/rpc.o 00:02:37.209 CC lib/vfu_tgt/tgt_endpoint.o 00:02:37.209 CC lib/vfu_tgt/tgt_rpc.o 00:02:37.209 CC 
lib/blob/blobstore.o 00:02:37.209 CC lib/blob/request.o 00:02:37.209 CC lib/blob/zeroes.o 00:02:37.209 CC lib/blob/blob_bs_dev.o 00:02:37.209 LIB libspdk_init.a 00:02:37.209 LIB libspdk_virtio.a 00:02:37.209 LIB libspdk_vfu_tgt.a 00:02:37.469 LIB libspdk_nvme.a 00:02:37.469 CC lib/event/app.o 00:02:37.469 CC lib/event/reactor.o 00:02:37.469 CC lib/event/log_rpc.o 00:02:37.469 CC lib/event/app_rpc.o 00:02:37.469 CC lib/event/scheduler_static.o 00:02:37.762 LIB libspdk_accel.a 00:02:37.762 LIB libspdk_event.a 00:02:38.066 CC lib/bdev/bdev.o 00:02:38.066 CC lib/bdev/part.o 00:02:38.066 CC lib/bdev/bdev_rpc.o 00:02:38.066 CC lib/bdev/bdev_zone.o 00:02:38.066 CC lib/bdev/scsi_nvme.o 00:02:38.653 LIB libspdk_blob.a 00:02:38.912 CC lib/lvol/lvol.o 00:02:38.912 CC lib/blobfs/blobfs.o 00:02:38.912 CC lib/blobfs/tree.o 00:02:39.481 LIB libspdk_lvol.a 00:02:39.481 LIB libspdk_blobfs.a 00:02:39.481 LIB libspdk_bdev.a 00:02:40.046 CC lib/ftl/ftl_core.o 00:02:40.046 CC lib/ftl/ftl_init.o 00:02:40.046 CC lib/ftl/ftl_layout.o 00:02:40.046 CC lib/ftl/ftl_debug.o 00:02:40.046 CC lib/ftl/ftl_io.o 00:02:40.046 CC lib/ftl/ftl_sb.o 00:02:40.046 CC lib/scsi/dev.o 00:02:40.046 CC lib/ftl/ftl_l2p.o 00:02:40.046 CC lib/scsi/lun.o 00:02:40.046 CC lib/ftl/ftl_band.o 00:02:40.046 CC lib/ftl/ftl_l2p_flat.o 00:02:40.046 CC lib/scsi/port.o 00:02:40.046 CC lib/ftl/ftl_nv_cache.o 00:02:40.046 CC lib/scsi/scsi.o 00:02:40.046 CC lib/ftl/ftl_band_ops.o 00:02:40.046 CC lib/scsi/scsi_bdev.o 00:02:40.046 CC lib/scsi/scsi_pr.o 00:02:40.046 CC lib/ftl/ftl_writer.o 00:02:40.046 CC lib/scsi/scsi_rpc.o 00:02:40.046 CC lib/ftl/ftl_rq.o 00:02:40.046 CC lib/scsi/task.o 00:02:40.046 CC lib/nvmf/ctrlr.o 00:02:40.046 CC lib/ftl/ftl_reloc.o 00:02:40.046 CC lib/nvmf/ctrlr_discovery.o 00:02:40.046 CC lib/ftl/ftl_l2p_cache.o 00:02:40.046 CC lib/ftl/ftl_p2l.o 00:02:40.046 CC lib/nvmf/ctrlr_bdev.o 00:02:40.047 CC lib/nvmf/subsystem.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt.o 00:02:40.047 CC lib/nvmf/nvmf.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:40.047 CC lib/nvmf/nvmf_rpc.o 00:02:40.047 CC lib/nvmf/transport.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:40.047 CC lib/nvmf/rdma.o 00:02:40.047 CC lib/nvmf/tcp.o 00:02:40.047 CC lib/nvmf/vfio_user.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:40.047 CC lib/ftl/utils/ftl_conf.o 00:02:40.047 CC lib/ftl/utils/ftl_md.o 00:02:40.047 CC lib/ftl/utils/ftl_mempool.o 00:02:40.047 CC lib/ftl/utils/ftl_property.o 00:02:40.047 CC lib/ftl/utils/ftl_bitmap.o 00:02:40.047 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:40.047 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:40.047 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:40.047 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:40.047 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:40.047 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:40.047 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:40.047 CC lib/nbd/nbd.o 00:02:40.047 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:40.047 CC lib/ftl/base/ftl_base_dev.o 00:02:40.047 CC lib/nbd/nbd_rpc.o 00:02:40.047 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:40.047 CC lib/ftl/base/ftl_base_bdev.o 00:02:40.047 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:40.047 CC lib/ftl/ftl_trace.o 00:02:40.047 CC lib/ublk/ublk.o 00:02:40.047 CC lib/ublk/ublk_rpc.o 00:02:40.305 LIB libspdk_scsi.a 00:02:40.305 LIB libspdk_nbd.a 00:02:40.563 LIB libspdk_ublk.a 00:02:40.563 LIB libspdk_ftl.a 00:02:40.563 CC lib/vhost/vhost.o 00:02:40.563 CC lib/vhost/vhost_scsi.o 00:02:40.563 CC lib/vhost/vhost_rpc.o 00:02:40.563 CC lib/vhost/vhost_blk.o 00:02:40.563 CC lib/vhost/rte_vhost_user.o 00:02:40.563 CC lib/iscsi/conn.o 00:02:40.563 CC lib/iscsi/init_grp.o 00:02:40.563 CC lib/iscsi/iscsi.o 00:02:40.563 CC lib/iscsi/md5.o 00:02:40.563 CC lib/iscsi/param.o 00:02:40.563 CC lib/iscsi/portal_grp.o 00:02:40.563 CC lib/iscsi/tgt_node.o 00:02:40.563 CC lib/iscsi/iscsi_subsystem.o 00:02:40.563 CC lib/iscsi/iscsi_rpc.o 00:02:40.563 CC lib/iscsi/task.o 00:02:41.128 LIB libspdk_nvmf.a 00:02:41.128 LIB libspdk_vhost.a 00:02:41.407 LIB libspdk_iscsi.a 00:02:41.667 CC module/vfu_device/vfu_virtio_rpc.o 00:02:41.667 CC module/vfu_device/vfu_virtio.o 00:02:41.667 CC module/vfu_device/vfu_virtio_blk.o 00:02:41.667 CC module/vfu_device/vfu_virtio_scsi.o 00:02:41.667 CC module/env_dpdk/env_dpdk_rpc.o 00:02:41.925 LIB libspdk_env_dpdk_rpc.a 00:02:41.925 CC module/accel/iaa/accel_iaa_rpc.o 00:02:41.925 CC module/accel/iaa/accel_iaa.o 00:02:41.925 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:41.925 CC module/accel/error/accel_error.o 00:02:41.925 CC module/accel/error/accel_error_rpc.o 00:02:41.925 CC module/blob/bdev/blob_bdev.o 00:02:41.925 CC module/accel/ioat/accel_ioat.o 00:02:41.925 CC module/accel/ioat/accel_ioat_rpc.o 00:02:41.925 CC module/sock/posix/posix.o 00:02:41.925 CC module/scheduler/gscheduler/gscheduler.o 00:02:41.925 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:41.925 CC module/accel/dsa/accel_dsa.o 00:02:41.925 CC module/accel/dsa/accel_dsa_rpc.o 00:02:41.925 LIB libspdk_scheduler_dpdk_governor.a 00:02:41.925 LIB libspdk_scheduler_gscheduler.a 00:02:41.925 LIB libspdk_accel_error.a 00:02:41.925 LIB libspdk_accel_iaa.a 00:02:41.925 LIB libspdk_accel_ioat.a 00:02:41.925 LIB libspdk_scheduler_dynamic.a 00:02:41.925 LIB libspdk_blob_bdev.a 00:02:42.185 LIB libspdk_accel_dsa.a 00:02:42.185 LIB libspdk_vfu_device.a 00:02:42.185 LIB libspdk_sock_posix.a 00:02:42.443 CC module/bdev/gpt/gpt.o 00:02:42.443 CC module/bdev/gpt/vbdev_gpt.o 00:02:42.443 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:42.443 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:42.443 CC module/bdev/lvol/vbdev_lvol.o 00:02:42.443 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:42.443 CC module/bdev/raid/bdev_raid_sb.o 00:02:42.443 CC module/bdev/raid/bdev_raid.o 00:02:42.443 CC module/bdev/raid/bdev_raid_rpc.o 00:02:42.443 CC module/bdev/raid/raid0.o 00:02:42.443 CC module/bdev/raid/raid1.o 00:02:42.443 CC module/bdev/aio/bdev_aio.o 00:02:42.443 CC module/bdev/iscsi/bdev_iscsi.o 00:02:42.443 CC module/bdev/aio/bdev_aio_rpc.o 00:02:42.443 CC module/bdev/delay/vbdev_delay.o 00:02:42.443 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:42.443 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:42.443 CC module/bdev/raid/concat.o 00:02:42.443 CC module/bdev/error/vbdev_error.o 00:02:42.443 CC module/bdev/error/vbdev_error_rpc.o 00:02:42.443 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:42.443 CC module/bdev/passthru/vbdev_passthru.o 00:02:42.443 CC module/bdev/split/vbdev_split.o 00:02:42.443 CC module/bdev/split/vbdev_split_rpc.o 00:02:42.443 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:42.443 CC module/blobfs/bdev/blobfs_bdev.o 00:02:42.443 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:02:42.443 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:42.443 CC module/bdev/nvme/bdev_nvme.o 00:02:42.443 CC module/bdev/malloc/bdev_malloc.o 00:02:42.443 CC module/bdev/nvme/nvme_rpc.o 00:02:42.443 CC module/bdev/nvme/bdev_mdns_client.o 00:02:42.443 CC module/bdev/nvme/vbdev_opal.o 00:02:42.443 CC module/bdev/null/bdev_null.o 00:02:42.443 CC module/bdev/null/bdev_null_rpc.o 00:02:42.443 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:42.443 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:42.443 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:42.443 CC module/bdev/ftl/bdev_ftl.o 00:02:42.443 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:42.443 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:42.443 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:42.702 LIB libspdk_blobfs_bdev.a 00:02:42.702 LIB libspdk_bdev_gpt.a 00:02:42.702 LIB libspdk_bdev_split.a 00:02:42.702 LIB libspdk_bdev_error.a 00:02:42.702 LIB libspdk_bdev_null.a 00:02:42.702 LIB libspdk_bdev_aio.a 00:02:42.702 LIB libspdk_bdev_zone_block.a 00:02:42.702 LIB libspdk_bdev_passthru.a 00:02:42.702 LIB libspdk_bdev_ftl.a 00:02:42.702 LIB libspdk_bdev_iscsi.a 00:02:42.702 LIB libspdk_bdev_delay.a 00:02:42.702 LIB libspdk_bdev_malloc.a 00:02:42.702 LIB libspdk_bdev_lvol.a 00:02:42.702 LIB libspdk_bdev_virtio.a 00:02:42.961 LIB libspdk_bdev_raid.a 00:02:43.528 LIB libspdk_bdev_nvme.a 00:02:44.096 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:44.096 CC module/event/subsystems/vmd/vmd.o 00:02:44.096 CC module/event/subsystems/sock/sock.o 00:02:44.096 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:44.096 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:44.096 CC module/event/subsystems/scheduler/scheduler.o 00:02:44.096 CC module/event/subsystems/iobuf/iobuf.o 00:02:44.096 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:44.355 LIB libspdk_event_vmd.a 00:02:44.355 LIB libspdk_event_sock.a 00:02:44.355 LIB libspdk_event_vhost_blk.a 00:02:44.355 LIB libspdk_event_vfu_tgt.a 00:02:44.355 LIB libspdk_event_scheduler.a 00:02:44.355 LIB libspdk_event_iobuf.a 00:02:44.614 CC module/event/subsystems/accel/accel.o 00:02:44.614 LIB libspdk_event_accel.a 00:02:44.873 CC module/event/subsystems/bdev/bdev.o 00:02:45.132 LIB libspdk_event_bdev.a 00:02:45.391 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:45.391 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.391 CC module/event/subsystems/nbd/nbd.o 00:02:45.391 CC module/event/subsystems/scsi/scsi.o 00:02:45.391 CC module/event/subsystems/ublk/ublk.o 00:02:45.391 LIB libspdk_event_nbd.a 00:02:45.391 LIB libspdk_event_scsi.a 00:02:45.391 LIB libspdk_event_ublk.a 00:02:45.391 LIB libspdk_event_nvmf.a 00:02:45.650 CC module/event/subsystems/iscsi/iscsi.o 00:02:45.909 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:45.909 LIB libspdk_event_iscsi.a 00:02:45.909 LIB libspdk_event_vhost_scsi.a 00:02:46.168 CC app/trace_record/trace_record.o 00:02:46.168 CC app/spdk_nvme_perf/perf.o 00:02:46.168 CC app/spdk_nvme_identify/identify.o 00:02:46.168 CC app/spdk_nvme_discover/discovery_aer.o 00:02:46.168 CXX app/trace/trace.o 00:02:46.168 CC app/iscsi_tgt/iscsi_tgt.o 00:02:46.168 CC app/spdk_lspci/spdk_lspci.o 00:02:46.168 CC app/spdk_top/spdk_top.o 00:02:46.168 TEST_HEADER include/spdk/accel.h 00:02:46.168 TEST_HEADER include/spdk/accel_module.h 00:02:46.168 TEST_HEADER include/spdk/barrier.h 00:02:46.168 TEST_HEADER include/spdk/assert.h 00:02:46.168 TEST_HEADER include/spdk/base64.h 00:02:46.168 TEST_HEADER include/spdk/bdev.h 00:02:46.168 TEST_HEADER 
include/spdk/bdev_module.h 00:02:46.168 TEST_HEADER include/spdk/bit_array.h 00:02:46.168 TEST_HEADER include/spdk/bdev_zone.h 00:02:46.168 TEST_HEADER include/spdk/bit_pool.h 00:02:46.168 TEST_HEADER include/spdk/blob_bdev.h 00:02:46.168 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:46.168 TEST_HEADER include/spdk/blobfs.h 00:02:46.168 TEST_HEADER include/spdk/blob.h 00:02:46.168 TEST_HEADER include/spdk/conf.h 00:02:46.168 TEST_HEADER include/spdk/config.h 00:02:46.168 TEST_HEADER include/spdk/cpuset.h 00:02:46.168 TEST_HEADER include/spdk/crc16.h 00:02:46.168 TEST_HEADER include/spdk/crc32.h 00:02:46.168 CC test/rpc_client/rpc_client_test.o 00:02:46.168 TEST_HEADER include/spdk/crc64.h 00:02:46.168 TEST_HEADER include/spdk/dif.h 00:02:46.168 TEST_HEADER include/spdk/dma.h 00:02:46.168 TEST_HEADER include/spdk/endian.h 00:02:46.168 TEST_HEADER include/spdk/env_dpdk.h 00:02:46.168 CC app/vhost/vhost.o 00:02:46.168 TEST_HEADER include/spdk/env.h 00:02:46.168 TEST_HEADER include/spdk/fd_group.h 00:02:46.168 TEST_HEADER include/spdk/event.h 00:02:46.168 CC app/spdk_dd/spdk_dd.o 00:02:46.168 TEST_HEADER include/spdk/fd.h 00:02:46.168 TEST_HEADER include/spdk/file.h 00:02:46.168 TEST_HEADER include/spdk/ftl.h 00:02:46.168 TEST_HEADER include/spdk/gpt_spec.h 00:02:46.168 TEST_HEADER include/spdk/hexlify.h 00:02:46.168 TEST_HEADER include/spdk/histogram_data.h 00:02:46.168 TEST_HEADER include/spdk/idxd.h 00:02:46.168 CC app/nvmf_tgt/nvmf_main.o 00:02:46.168 TEST_HEADER include/spdk/idxd_spec.h 00:02:46.168 TEST_HEADER include/spdk/init.h 00:02:46.168 TEST_HEADER include/spdk/ioat.h 00:02:46.168 TEST_HEADER include/spdk/ioat_spec.h 00:02:46.168 TEST_HEADER include/spdk/iscsi_spec.h 00:02:46.168 TEST_HEADER include/spdk/json.h 00:02:46.168 TEST_HEADER include/spdk/jsonrpc.h 00:02:46.168 TEST_HEADER include/spdk/likely.h 00:02:46.168 TEST_HEADER include/spdk/lvol.h 00:02:46.168 TEST_HEADER include/spdk/log.h 00:02:46.168 TEST_HEADER include/spdk/memory.h 00:02:46.168 TEST_HEADER include/spdk/mmio.h 00:02:46.168 TEST_HEADER include/spdk/nbd.h 00:02:46.168 TEST_HEADER include/spdk/notify.h 00:02:46.168 TEST_HEADER include/spdk/nvme.h 00:02:46.168 TEST_HEADER include/spdk/nvme_intel.h 00:02:46.168 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:46.169 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:46.169 TEST_HEADER include/spdk/nvme_spec.h 00:02:46.169 TEST_HEADER include/spdk/nvme_zns.h 00:02:46.169 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:46.169 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:46.169 TEST_HEADER include/spdk/nvmf.h 00:02:46.169 TEST_HEADER include/spdk/nvmf_transport.h 00:02:46.169 TEST_HEADER include/spdk/nvmf_spec.h 00:02:46.169 TEST_HEADER include/spdk/opal.h 00:02:46.169 TEST_HEADER include/spdk/opal_spec.h 00:02:46.169 CC app/spdk_tgt/spdk_tgt.o 00:02:46.169 TEST_HEADER include/spdk/pci_ids.h 00:02:46.169 TEST_HEADER include/spdk/pipe.h 00:02:46.169 TEST_HEADER include/spdk/reduce.h 00:02:46.169 TEST_HEADER include/spdk/queue.h 00:02:46.169 TEST_HEADER include/spdk/scheduler.h 00:02:46.169 TEST_HEADER include/spdk/rpc.h 00:02:46.169 TEST_HEADER include/spdk/scsi.h 00:02:46.169 TEST_HEADER include/spdk/scsi_spec.h 00:02:46.169 TEST_HEADER include/spdk/sock.h 00:02:46.169 TEST_HEADER include/spdk/stdinc.h 00:02:46.169 TEST_HEADER include/spdk/string.h 00:02:46.169 TEST_HEADER include/spdk/thread.h 00:02:46.169 TEST_HEADER include/spdk/trace.h 00:02:46.169 TEST_HEADER include/spdk/trace_parser.h 00:02:46.169 TEST_HEADER include/spdk/tree.h 00:02:46.169 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:02:46.169 TEST_HEADER include/spdk/ublk.h 00:02:46.169 TEST_HEADER include/spdk/util.h 00:02:46.169 TEST_HEADER include/spdk/version.h 00:02:46.169 TEST_HEADER include/spdk/uuid.h 00:02:46.169 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:46.169 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:46.169 TEST_HEADER include/spdk/vmd.h 00:02:46.169 TEST_HEADER include/spdk/vhost.h 00:02:46.169 TEST_HEADER include/spdk/zipf.h 00:02:46.169 TEST_HEADER include/spdk/xor.h 00:02:46.169 CXX test/cpp_headers/accel_module.o 00:02:46.169 CXX test/cpp_headers/accel.o 00:02:46.437 CXX test/cpp_headers/assert.o 00:02:46.437 CXX test/cpp_headers/barrier.o 00:02:46.437 CXX test/cpp_headers/base64.o 00:02:46.437 CXX test/cpp_headers/bdev.o 00:02:46.437 CXX test/cpp_headers/bdev_zone.o 00:02:46.437 CXX test/cpp_headers/bdev_module.o 00:02:46.437 CXX test/cpp_headers/bit_array.o 00:02:46.437 CXX test/cpp_headers/bit_pool.o 00:02:46.437 CXX test/cpp_headers/blob_bdev.o 00:02:46.437 CXX test/cpp_headers/blobfs.o 00:02:46.437 CXX test/cpp_headers/blobfs_bdev.o 00:02:46.437 CXX test/cpp_headers/blob.o 00:02:46.437 CC app/fio/nvme/fio_plugin.o 00:02:46.437 CXX test/cpp_headers/conf.o 00:02:46.437 CXX test/cpp_headers/config.o 00:02:46.437 CXX test/cpp_headers/cpuset.o 00:02:46.437 CXX test/cpp_headers/crc16.o 00:02:46.437 CXX test/cpp_headers/crc32.o 00:02:46.437 CXX test/cpp_headers/dif.o 00:02:46.437 CXX test/cpp_headers/crc64.o 00:02:46.437 CXX test/cpp_headers/endian.o 00:02:46.437 CXX test/cpp_headers/dma.o 00:02:46.437 CXX test/cpp_headers/env_dpdk.o 00:02:46.437 CXX test/cpp_headers/env.o 00:02:46.437 CXX test/cpp_headers/event.o 00:02:46.437 CXX test/cpp_headers/fd_group.o 00:02:46.437 CXX test/cpp_headers/fd.o 00:02:46.437 CXX test/cpp_headers/file.o 00:02:46.437 CXX test/cpp_headers/gpt_spec.o 00:02:46.437 CXX test/cpp_headers/ftl.o 00:02:46.437 CXX test/cpp_headers/hexlify.o 00:02:46.437 CXX test/cpp_headers/histogram_data.o 00:02:46.437 CXX test/cpp_headers/idxd.o 00:02:46.437 CXX test/cpp_headers/idxd_spec.o 00:02:46.437 CXX test/cpp_headers/init.o 00:02:46.438 CC test/nvme/aer/aer.o 00:02:46.438 CC test/nvme/reset/reset.o 00:02:46.438 CC test/nvme/err_injection/err_injection.o 00:02:46.438 CC test/nvme/reserve/reserve.o 00:02:46.438 CC test/nvme/cuse/cuse.o 00:02:46.438 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:46.438 CC test/nvme/simple_copy/simple_copy.o 00:02:46.438 CC test/nvme/sgl/sgl.o 00:02:46.438 CC test/nvme/overhead/overhead.o 00:02:46.438 CC examples/idxd/perf/perf.o 00:02:46.438 CC test/nvme/connect_stress/connect_stress.o 00:02:46.438 CC test/nvme/e2edp/nvme_dp.o 00:02:46.438 CC examples/ioat/perf/perf.o 00:02:46.438 CC test/nvme/compliance/nvme_compliance.o 00:02:46.438 CC examples/nvme/hello_world/hello_world.o 00:02:46.438 CC examples/sock/hello_world/hello_sock.o 00:02:46.438 CC test/thread/lock/spdk_lock.o 00:02:46.438 CC examples/util/zipf/zipf.o 00:02:46.438 CC test/nvme/startup/startup.o 00:02:46.438 CC test/nvme/fused_ordering/fused_ordering.o 00:02:46.438 CC test/nvme/boot_partition/boot_partition.o 00:02:46.438 CC examples/nvme/arbitration/arbitration.o 00:02:46.438 CC examples/ioat/verify/verify.o 00:02:46.438 CC examples/nvme/reconnect/reconnect.o 00:02:46.438 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:46.438 CC test/app/stub/stub.o 00:02:46.438 CC test/nvme/fdp/fdp.o 00:02:46.438 CC examples/nvme/hotplug/hotplug.o 00:02:46.438 CC test/app/histogram_perf/histogram_perf.o 00:02:46.438 CC 
test/thread/poller_perf/poller_perf.o 00:02:46.438 CC app/fio/bdev/fio_plugin.o 00:02:46.438 CXX test/cpp_headers/ioat.o 00:02:46.438 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:46.438 CC test/event/reactor/reactor.o 00:02:46.438 LINK spdk_lspci 00:02:46.438 CC examples/nvme/abort/abort.o 00:02:46.438 CC test/app/jsoncat/jsoncat.o 00:02:46.438 CC test/event/reactor_perf/reactor_perf.o 00:02:46.438 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:46.438 CC test/event/event_perf/event_perf.o 00:02:46.438 CC examples/vmd/lsvmd/lsvmd.o 00:02:46.438 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:46.438 CC test/env/memory/memory_ut.o 00:02:46.438 CC examples/accel/perf/accel_perf.o 00:02:46.438 CC examples/vmd/led/led.o 00:02:46.438 CC test/env/vtophys/vtophys.o 00:02:46.438 CC test/env/pci/pci_ut.o 00:02:46.438 CC test/event/app_repeat/app_repeat.o 00:02:46.438 CC test/blobfs/mkfs/mkfs.o 00:02:46.438 CC test/bdev/bdevio/bdevio.o 00:02:46.438 CC examples/blob/hello_world/hello_blob.o 00:02:46.438 CC examples/blob/cli/blobcli.o 00:02:46.438 CC examples/nvmf/nvmf/nvmf.o 00:02:46.438 CC test/app/bdev_svc/bdev_svc.o 00:02:46.438 CC examples/thread/thread/thread_ex.o 00:02:46.438 CC test/dma/test_dma/test_dma.o 00:02:46.438 CC test/accel/dif/dif.o 00:02:46.438 CC examples/bdev/hello_world/hello_bdev.o 00:02:46.438 CC examples/bdev/bdevperf/bdevperf.o 00:02:46.438 CC test/event/scheduler/scheduler.o 00:02:46.438 LINK spdk_nvme_discover 00:02:46.438 LINK spdk_trace_record 00:02:46.438 LINK rpc_client_test 00:02:46.438 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:46.438 CC test/env/mem_callbacks/mem_callbacks.o 00:02:46.438 CC test/lvol/esnap/esnap.o 00:02:46.438 LINK nvmf_tgt 00:02:46.438 LINK vhost 00:02:46.438 LINK iscsi_tgt 00:02:46.438 CXX test/cpp_headers/ioat_spec.o 00:02:46.438 CXX test/cpp_headers/iscsi_spec.o 00:02:46.438 LINK interrupt_tgt 00:02:46.438 CXX test/cpp_headers/json.o 00:02:46.438 CXX test/cpp_headers/jsonrpc.o 00:02:46.438 CXX test/cpp_headers/likely.o 00:02:46.438 CXX test/cpp_headers/log.o 00:02:46.438 CXX test/cpp_headers/lvol.o 00:02:46.438 CXX test/cpp_headers/memory.o 00:02:46.438 CXX test/cpp_headers/mmio.o 00:02:46.438 CXX test/cpp_headers/nbd.o 00:02:46.438 LINK jsoncat 00:02:46.438 CXX test/cpp_headers/notify.o 00:02:46.438 CXX test/cpp_headers/nvme.o 00:02:46.438 CXX test/cpp_headers/nvme_intel.o 00:02:46.438 CXX test/cpp_headers/nvme_ocssd.o 00:02:46.438 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:46.438 CXX test/cpp_headers/nvme_spec.o 00:02:46.438 CXX test/cpp_headers/nvme_zns.o 00:02:46.438 LINK lsvmd 00:02:46.438 CXX test/cpp_headers/nvmf_cmd.o 00:02:46.438 LINK reactor 00:02:46.438 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:46.438 CXX test/cpp_headers/nvmf.o 00:02:46.438 CXX test/cpp_headers/nvmf_spec.o 00:02:46.438 CXX test/cpp_headers/nvmf_transport.o 00:02:46.438 LINK zipf 00:02:46.438 CXX test/cpp_headers/opal.o 00:02:46.438 CXX test/cpp_headers/opal_spec.o 00:02:46.438 CXX test/cpp_headers/pci_ids.o 00:02:46.701 LINK spdk_tgt 00:02:46.701 LINK poller_perf 00:02:46.701 LINK histogram_perf 00:02:46.701 LINK reactor_perf 00:02:46.701 CXX test/cpp_headers/pipe.o 00:02:46.701 LINK doorbell_aers 00:02:46.701 CXX test/cpp_headers/queue.o 00:02:46.701 LINK event_perf 00:02:46.701 LINK connect_stress 00:02:46.701 LINK led 00:02:46.701 LINK err_injection 00:02:46.701 LINK env_dpdk_post_init 00:02:46.701 LINK vtophys 00:02:46.701 CXX test/cpp_headers/reduce.o 00:02:46.701 CXX test/cpp_headers/rpc.o 00:02:46.701 CXX test/cpp_headers/scheduler.o 
00:02:46.701 CXX test/cpp_headers/scsi.o 00:02:46.701 LINK boot_partition 00:02:46.701 LINK startup 00:02:46.701 LINK app_repeat 00:02:46.701 CXX test/cpp_headers/scsi_spec.o 00:02:46.701 LINK reserve 00:02:46.701 CXX test/cpp_headers/sock.o 00:02:46.701 LINK pmr_persistence 00:02:46.701 LINK stub 00:02:46.701 LINK cmb_copy 00:02:46.701 LINK ioat_perf 00:02:46.701 CXX test/cpp_headers/stdinc.o 00:02:46.701 LINK verify 00:02:46.701 LINK fused_ordering 00:02:46.701 LINK hello_sock 00:02:46.701 LINK hello_world 00:02:46.701 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:46.701 LINK bdev_svc 00:02:46.701 LINK hotplug 00:02:46.701 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:46.701 LINK aer 00:02:46.701 LINK simple_copy 00:02:46.701 LINK mkfs 00:02:46.701 LINK sgl 00:02:46.701 LINK fdp 00:02:46.701 LINK spdk_trace 00:02:46.701 LINK hello_blob 00:02:46.701 LINK overhead 00:02:46.701 LINK nvme_dp 00:02:46.701 LINK reset 00:02:46.701 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:46.701 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:46.701 LINK hello_bdev 00:02:46.701 LINK thread 00:02:46.701 LINK scheduler 00:02:46.701 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:46.701 CXX test/cpp_headers/string.o 00:02:46.701 CXX test/cpp_headers/thread.o 00:02:46.701 CXX test/cpp_headers/trace.o 00:02:46.701 CXX test/cpp_headers/trace_parser.o 00:02:46.701 CXX test/cpp_headers/tree.o 00:02:46.701 CXX test/cpp_headers/ublk.o 00:02:46.701 LINK reconnect 00:02:46.701 CXX test/cpp_headers/util.o 00:02:46.701 CXX test/cpp_headers/uuid.o 00:02:46.701 CXX test/cpp_headers/version.o 00:02:46.701 CXX test/cpp_headers/vfio_user_pci.o 00:02:46.701 CXX test/cpp_headers/vfio_user_spec.o 00:02:46.701 CXX test/cpp_headers/vhost.o 00:02:46.701 CXX test/cpp_headers/vmd.o 00:02:46.701 LINK spdk_dd 00:02:46.701 LINK nvmf 00:02:46.701 CXX test/cpp_headers/xor.o 00:02:46.701 CXX test/cpp_headers/zipf.o 00:02:46.963 LINK idxd_perf 00:02:46.963 LINK arbitration 00:02:46.963 LINK abort 00:02:46.963 LINK test_dma 00:02:46.963 LINK nvme_manage 00:02:46.963 LINK nvme_compliance 00:02:46.963 LINK pci_ut 00:02:46.963 LINK dif 00:02:46.963 LINK bdevio 00:02:46.963 LINK accel_perf 00:02:46.963 LINK blobcli 00:02:46.963 LINK llvm_vfio_fuzz 00:02:46.963 LINK spdk_bdev 00:02:47.221 LINK spdk_nvme 00:02:47.221 LINK nvme_fuzz 00:02:47.221 LINK mem_callbacks 00:02:47.221 LINK spdk_nvme_identify 00:02:47.221 LINK spdk_nvme_perf 00:02:47.221 LINK bdevperf 00:02:47.221 LINK vhost_fuzz 00:02:47.221 LINK memory_ut 00:02:47.479 LINK spdk_top 00:02:47.479 LINK cuse 00:02:47.479 LINK llvm_nvme_fuzz 00:02:47.736 LINK spdk_lock 00:02:47.994 LINK iscsi_fuzz 00:02:49.899 LINK esnap 00:02:50.157 00:02:50.157 real 0m40.745s 00:02:50.157 user 5m42.743s 00:02:50.157 sys 2m47.763s 00:02:50.157 17:12:09 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:50.157 17:12:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.157 ************************************ 00:02:50.157 END TEST make 00:02:50.157 ************************************ 00:02:50.416 17:12:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:50.416 17:12:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:50.416 17:12:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:50.416 17:12:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:50.416 17:12:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:50.416 17:12:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:50.416 17:12:09 -- scripts/common.sh@333 -- # local 
ver2 ver2_l 00:02:50.416 17:12:09 -- scripts/common.sh@335 -- # IFS=.-: 00:02:50.416 17:12:09 -- scripts/common.sh@335 -- # read -ra ver1 00:02:50.416 17:12:09 -- scripts/common.sh@336 -- # IFS=.-: 00:02:50.416 17:12:09 -- scripts/common.sh@336 -- # read -ra ver2 00:02:50.416 17:12:09 -- scripts/common.sh@337 -- # local 'op=<' 00:02:50.416 17:12:09 -- scripts/common.sh@339 -- # ver1_l=2 00:02:50.416 17:12:09 -- scripts/common.sh@340 -- # ver2_l=1 00:02:50.416 17:12:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:50.416 17:12:09 -- scripts/common.sh@343 -- # case "$op" in 00:02:50.416 17:12:09 -- scripts/common.sh@344 -- # : 1 00:02:50.416 17:12:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:50.416 17:12:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:50.416 17:12:10 -- scripts/common.sh@364 -- # decimal 1 00:02:50.416 17:12:10 -- scripts/common.sh@352 -- # local d=1 00:02:50.416 17:12:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:50.416 17:12:10 -- scripts/common.sh@354 -- # echo 1 00:02:50.416 17:12:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:50.416 17:12:10 -- scripts/common.sh@365 -- # decimal 2 00:02:50.416 17:12:10 -- scripts/common.sh@352 -- # local d=2 00:02:50.416 17:12:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:50.416 17:12:10 -- scripts/common.sh@354 -- # echo 2 00:02:50.416 17:12:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:50.417 17:12:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:50.417 17:12:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:50.417 17:12:10 -- scripts/common.sh@367 -- # return 0 00:02:50.417 17:12:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:50.417 17:12:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:50.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.417 --rc genhtml_branch_coverage=1 00:02:50.417 --rc genhtml_function_coverage=1 00:02:50.417 --rc genhtml_legend=1 00:02:50.417 --rc geninfo_all_blocks=1 00:02:50.417 --rc geninfo_unexecuted_blocks=1 00:02:50.417 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.417 ' 00:02:50.417 17:12:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:50.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.417 --rc genhtml_branch_coverage=1 00:02:50.417 --rc genhtml_function_coverage=1 00:02:50.417 --rc genhtml_legend=1 00:02:50.417 --rc geninfo_all_blocks=1 00:02:50.417 --rc geninfo_unexecuted_blocks=1 00:02:50.417 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.417 ' 00:02:50.417 17:12:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:50.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.417 --rc genhtml_branch_coverage=1 00:02:50.417 --rc genhtml_function_coverage=1 00:02:50.417 --rc genhtml_legend=1 00:02:50.417 --rc geninfo_all_blocks=1 00:02:50.417 --rc geninfo_unexecuted_blocks=1 00:02:50.417 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.417 ' 00:02:50.417 17:12:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:50.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.417 --rc genhtml_branch_coverage=1 00:02:50.417 --rc genhtml_function_coverage=1 00:02:50.417 --rc genhtml_legend=1 00:02:50.417 --rc geninfo_all_blocks=1 00:02:50.417 --rc 
geninfo_unexecuted_blocks=1 00:02:50.417 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.417 ' 00:02:50.417 17:12:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:50.417 17:12:10 -- nvmf/common.sh@7 -- # uname -s 00:02:50.417 17:12:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.417 17:12:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.417 17:12:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.417 17:12:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.417 17:12:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.417 17:12:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.417 17:12:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.417 17:12:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.417 17:12:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.417 17:12:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.417 17:12:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:50.417 17:12:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:50.417 17:12:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.417 17:12:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.417 17:12:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:50.417 17:12:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:50.417 17:12:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.417 17:12:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.417 17:12:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.417 17:12:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.417 17:12:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.417 17:12:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.417 17:12:10 -- paths/export.sh@5 -- # export PATH 00:02:50.417 17:12:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.417 17:12:10 -- nvmf/common.sh@46 -- # : 0 00:02:50.417 17:12:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:50.417 17:12:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:50.417 17:12:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:50.417 17:12:10 -- nvmf/common.sh@28 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.417 17:12:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.417 17:12:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:50.417 17:12:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:50.417 17:12:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:50.417 17:12:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.417 17:12:10 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.417 17:12:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.417 17:12:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.417 17:12:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:50.417 17:12:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.417 17:12:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:50.417 17:12:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.417 17:12:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.417 17:12:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.417 17:12:10 -- spdk/autotest.sh@48 -- # udevadm_pid=292179 00:02:50.417 17:12:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.417 17:12:10 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:50.417 17:12:10 -- spdk/autotest.sh@54 -- # echo 292181 00:02:50.417 17:12:10 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:50.417 17:12:10 -- spdk/autotest.sh@56 -- # echo 292182 00:02:50.417 17:12:10 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:50.417 17:12:10 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:50.417 17:12:10 -- spdk/autotest.sh@60 -- # echo 292183 00:02:50.417 17:12:10 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:02:50.417 17:12:10 -- spdk/autotest.sh@62 -- # echo 292184 00:02:50.417 17:12:10 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:02:50.417 17:12:10 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:50.417 17:12:10 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:50.417 17:12:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:50.417 17:12:10 -- common/autotest_common.sh@10 -- # set +x 00:02:50.417 17:12:10 -- spdk/autotest.sh@70 -- # create_test_list 00:02:50.417 17:12:10 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:50.417 17:12:10 -- common/autotest_common.sh@10 -- # set +x 00:02:50.417 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:50.417 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:50.417 17:12:10 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:50.417 17:12:10 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:50.417 17:12:10 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:50.417 17:12:10 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:50.417 17:12:10 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:50.417 17:12:10 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:50.417 17:12:10 -- common/autotest_common.sh@1450 -- # uname 00:02:50.417 17:12:10 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:50.417 17:12:10 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:50.417 17:12:10 -- common/autotest_common.sh@1470 -- # uname 00:02:50.417 17:12:10 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:50.417 17:12:10 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:50.417 17:12:10 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:50.677 lcov: LCOV version 1.15 00:02:50.677 17:12:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:58.802 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:58.802 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:58.802 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:05.437 17:12:24 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:05.437 17:12:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:05.437 17:12:24 -- common/autotest_common.sh@10 -- # set +x 00:03:05.437 17:12:24 -- spdk/autotest.sh@89 -- # rm -f 00:03:05.437 17:12:24 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.976 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:07.976 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:07.976 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:07.976 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:07.976 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:07.976 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.236 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.236 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:08.236 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:08.236 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:08.236 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:08.236 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:08.236 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:08.236 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.236 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.495 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:08.495 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:08.495 17:12:28 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:08.495 17:12:28 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:08.495 17:12:28 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:08.495 17:12:28 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:08.495 17:12:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:08.495 17:12:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:08.495 17:12:28 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:08.495 17:12:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.495 17:12:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:08.495 17:12:28 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:08.495 17:12:28 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:03:08.495 17:12:28 -- spdk/autotest.sh@108 -- # grep -v p 00:03:08.495 17:12:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:08.495 17:12:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:08.495 17:12:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:08.495 17:12:28 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:08.495 17:12:28 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:08.495 No valid GPT data, bailing 00:03:08.495 17:12:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.495 17:12:28 -- scripts/common.sh@393 -- # pt= 00:03:08.495 17:12:28 -- scripts/common.sh@394 -- # return 1 00:03:08.495 17:12:28 -- spdk/autotest.sh@112 -- # dd 
if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:08.495 1+0 records in 00:03:08.495 1+0 records out 00:03:08.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583785 s, 180 MB/s 00:03:08.495 17:12:28 -- spdk/autotest.sh@116 -- # sync 00:03:08.495 17:12:28 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:08.495 17:12:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:08.495 17:12:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:15.067 17:12:33 -- spdk/autotest.sh@122 -- # uname -s 00:03:15.067 17:12:33 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:15.067 17:12:33 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:15.067 17:12:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:15.067 17:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:15.067 17:12:33 -- common/autotest_common.sh@10 -- # set +x 00:03:15.067 ************************************ 00:03:15.067 START TEST setup.sh 00:03:15.067 ************************************ 00:03:15.067 17:12:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:15.067 * Looking for test storage... 00:03:15.067 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:15.067 17:12:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:15.067 17:12:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:15.067 17:12:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:15.067 17:12:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:15.067 17:12:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:15.067 17:12:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:15.067 17:12:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:15.067 17:12:33 -- scripts/common.sh@335 -- # IFS=.-: 00:03:15.067 17:12:33 -- scripts/common.sh@335 -- # read -ra ver1 00:03:15.067 17:12:33 -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.067 17:12:33 -- scripts/common.sh@336 -- # read -ra ver2 00:03:15.067 17:12:33 -- scripts/common.sh@337 -- # local 'op=<' 00:03:15.067 17:12:33 -- scripts/common.sh@339 -- # ver1_l=2 00:03:15.067 17:12:33 -- scripts/common.sh@340 -- # ver2_l=1 00:03:15.067 17:12:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:15.067 17:12:33 -- scripts/common.sh@343 -- # case "$op" in 00:03:15.067 17:12:33 -- scripts/common.sh@344 -- # : 1 00:03:15.067 17:12:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:15.067 17:12:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.067 17:12:33 -- scripts/common.sh@364 -- # decimal 1 00:03:15.067 17:12:33 -- scripts/common.sh@352 -- # local d=1 00:03:15.067 17:12:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.068 17:12:33 -- scripts/common.sh@354 -- # echo 1 00:03:15.068 17:12:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:15.068 17:12:33 -- scripts/common.sh@365 -- # decimal 2 00:03:15.068 17:12:33 -- scripts/common.sh@352 -- # local d=2 00:03:15.068 17:12:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.068 17:12:33 -- scripts/common.sh@354 -- # echo 2 00:03:15.068 17:12:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:15.068 17:12:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:15.068 17:12:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:15.068 17:12:33 -- scripts/common.sh@367 -- # return 0 00:03:15.068 17:12:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.068 17:12:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.068 --rc genhtml_branch_coverage=1 00:03:15.068 --rc genhtml_function_coverage=1 00:03:15.068 --rc genhtml_legend=1 00:03:15.068 --rc geninfo_all_blocks=1 00:03:15.068 --rc geninfo_unexecuted_blocks=1 00:03:15.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.068 ' 00:03:15.068 17:12:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.068 --rc genhtml_branch_coverage=1 00:03:15.068 --rc genhtml_function_coverage=1 00:03:15.068 --rc genhtml_legend=1 00:03:15.068 --rc geninfo_all_blocks=1 00:03:15.068 --rc geninfo_unexecuted_blocks=1 00:03:15.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.068 ' 00:03:15.068 17:12:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.068 --rc genhtml_branch_coverage=1 00:03:15.068 --rc genhtml_function_coverage=1 00:03:15.068 --rc genhtml_legend=1 00:03:15.068 --rc geninfo_all_blocks=1 00:03:15.068 --rc geninfo_unexecuted_blocks=1 00:03:15.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.068 ' 00:03:15.068 17:12:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.068 --rc genhtml_branch_coverage=1 00:03:15.068 --rc genhtml_function_coverage=1 00:03:15.068 --rc genhtml_legend=1 00:03:15.068 --rc geninfo_all_blocks=1 00:03:15.068 --rc geninfo_unexecuted_blocks=1 00:03:15.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.068 ' 00:03:15.068 17:12:33 -- setup/test-setup.sh@10 -- # uname -s 00:03:15.068 17:12:33 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:15.068 17:12:33 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:15.068 17:12:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:15.068 17:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:15.068 17:12:33 -- common/autotest_common.sh@10 -- # set +x 00:03:15.068 ************************************ 00:03:15.068 START TEST acl 00:03:15.068 ************************************ 00:03:15.068 17:12:33 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:15.068 * Looking for test storage... 00:03:15.068 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:15.068 17:12:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:15.068 17:12:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:15.068 17:12:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:15.068 17:12:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:15.068 17:12:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:15.068 17:12:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:15.068 17:12:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:15.068 17:12:34 -- scripts/common.sh@335 -- # IFS=.-: 00:03:15.068 17:12:34 -- scripts/common.sh@335 -- # read -ra ver1 00:03:15.068 17:12:34 -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.068 17:12:34 -- scripts/common.sh@336 -- # read -ra ver2 00:03:15.068 17:12:34 -- scripts/common.sh@337 -- # local 'op=<' 00:03:15.068 17:12:34 -- scripts/common.sh@339 -- # ver1_l=2 00:03:15.068 17:12:34 -- scripts/common.sh@340 -- # ver2_l=1 00:03:15.068 17:12:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:15.068 17:12:34 -- scripts/common.sh@343 -- # case "$op" in 00:03:15.068 17:12:34 -- scripts/common.sh@344 -- # : 1 00:03:15.068 17:12:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:15.068 17:12:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:15.068 17:12:34 -- scripts/common.sh@364 -- # decimal 1 00:03:15.068 17:12:34 -- scripts/common.sh@352 -- # local d=1 00:03:15.068 17:12:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.068 17:12:34 -- scripts/common.sh@354 -- # echo 1 00:03:15.068 17:12:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:15.068 17:12:34 -- scripts/common.sh@365 -- # decimal 2 00:03:15.068 17:12:34 -- scripts/common.sh@352 -- # local d=2 00:03:15.068 17:12:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.068 17:12:34 -- scripts/common.sh@354 -- # echo 2 00:03:15.068 17:12:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:15.068 17:12:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:15.068 17:12:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:15.068 17:12:34 -- scripts/common.sh@367 -- # return 0 00:03:15.068 17:12:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.068 17:12:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.068 --rc genhtml_branch_coverage=1 00:03:15.068 --rc genhtml_function_coverage=1 00:03:15.068 --rc genhtml_legend=1 00:03:15.068 --rc geninfo_all_blocks=1 00:03:15.068 --rc geninfo_unexecuted_blocks=1 00:03:15.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.068 ' 00:03:15.068 17:12:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.068 --rc genhtml_branch_coverage=1 00:03:15.068 --rc genhtml_function_coverage=1 00:03:15.068 --rc genhtml_legend=1 00:03:15.068 --rc geninfo_all_blocks=1 00:03:15.068 --rc geninfo_unexecuted_blocks=1 00:03:15.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.068 ' 00:03:15.068 17:12:34 -- common/autotest_common.sh@1704 
-- # export 'LCOV=lcov 00:03:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.068 --rc genhtml_branch_coverage=1 00:03:15.068 --rc genhtml_function_coverage=1 00:03:15.068 --rc genhtml_legend=1 00:03:15.068 --rc geninfo_all_blocks=1 00:03:15.068 --rc geninfo_unexecuted_blocks=1 00:03:15.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.068 ' 00:03:15.068 17:12:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.068 --rc genhtml_branch_coverage=1 00:03:15.068 --rc genhtml_function_coverage=1 00:03:15.068 --rc genhtml_legend=1 00:03:15.068 --rc geninfo_all_blocks=1 00:03:15.068 --rc geninfo_unexecuted_blocks=1 00:03:15.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.068 ' 00:03:15.068 17:12:34 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:15.068 17:12:34 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:15.068 17:12:34 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:15.068 17:12:34 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:15.068 17:12:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:15.068 17:12:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:15.068 17:12:34 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:15.068 17:12:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:15.068 17:12:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:15.068 17:12:34 -- setup/acl.sh@12 -- # devs=() 00:03:15.068 17:12:34 -- setup/acl.sh@12 -- # declare -a devs 00:03:15.068 17:12:34 -- setup/acl.sh@13 -- # drivers=() 00:03:15.068 17:12:34 -- setup/acl.sh@13 -- # declare -A drivers 00:03:15.068 17:12:34 -- setup/acl.sh@51 -- # setup reset 00:03:15.068 17:12:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.068 17:12:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.362 17:12:37 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:18.362 17:12:37 -- setup/acl.sh@16 -- # local dev driver 00:03:18.362 17:12:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.362 17:12:37 -- setup/acl.sh@15 -- # setup output status 00:03:18.362 17:12:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.362 17:12:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:21.655 Hugepages 00:03:21.655 node hugesize free / total 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 00:03:21.655 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 
17:12:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:40 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.655 17:12:40 -- setup/acl.sh@20 -- # continue 00:03:21.655 17:12:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:41 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:21.655 17:12:41 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.655 17:12:41 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:21.655 17:12:41 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.655 17:12:41 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.655 17:12:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.655 17:12:41 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:21.655 17:12:41 -- setup/acl.sh@54 -- # run_test denied denied 00:03:21.655 17:12:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.655 17:12:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.655 17:12:41 -- common/autotest_common.sh@10 -- # set +x 00:03:21.655 ************************************ 00:03:21.655 START TEST denied 00:03:21.655 ************************************ 00:03:21.655 17:12:41 -- common/autotest_common.sh@1114 -- # denied 00:03:21.655 17:12:41 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:21.655 17:12:41 -- setup/acl.sh@38 -- # setup output config 00:03:21.655 17:12:41 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:21.655 17:12:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.655 17:12:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:24.946 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:24.946 17:12:44 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:24.946 17:12:44 -- setup/acl.sh@28 -- # local dev driver 00:03:24.946 17:12:44 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:24.946 17:12:44 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:24.946 17:12:44 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:24.946 17:12:44 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:24.946 17:12:44 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:24.946 17:12:44 -- setup/acl.sh@41 -- # setup reset 00:03:24.946 17:12:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.946 17:12:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.142 00:03:29.142 real 0m7.659s 00:03:29.142 user 0m2.250s 00:03:29.142 sys 0m4.589s 00:03:29.142 17:12:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:29.142 17:12:48 -- common/autotest_common.sh@10 -- # set +x 00:03:29.142 ************************************ 00:03:29.142 END TEST denied 00:03:29.142 ************************************ 00:03:29.142 17:12:48 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:29.142 17:12:48 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.142 17:12:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.142 17:12:48 -- common/autotest_common.sh@10 -- # set +x 00:03:29.142 ************************************ 00:03:29.142 START TEST allowed 00:03:29.142 ************************************ 00:03:29.142 17:12:48 -- common/autotest_common.sh@1114 -- # allowed 00:03:29.142 17:12:48 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:29.142 17:12:48 -- setup/acl.sh@45 -- # setup output config 00:03:29.142 17:12:48 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:29.142 17:12:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.142 17:12:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:34.416 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:34.416 17:12:53 -- setup/acl.sh@47 -- # verify 00:03:34.416 17:12:53 -- setup/acl.sh@28 -- # local dev driver 00:03:34.416 17:12:53 -- setup/acl.sh@48 -- # setup reset 00:03:34.416 17:12:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.416 17:12:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.708 00:03:37.708 real 0m8.645s 00:03:37.708 user 0m2.361s 00:03:37.708 sys 0m4.802s 00:03:37.708 17:12:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:37.708 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:03:37.708 ************************************ 00:03:37.708 END TEST allowed 00:03:37.708 ************************************ 00:03:37.708 00:03:37.708 real 0m23.538s 00:03:37.708 user 0m7.246s 00:03:37.708 sys 0m14.287s 00:03:37.708 17:12:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:37.708 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:03:37.708 ************************************ 00:03:37.708 END TEST acl 00:03:37.708 ************************************ 00:03:37.708 17:12:57 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:37.708 17:12:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.708 17:12:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.708 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:03:37.708 ************************************ 00:03:37.708 START TEST hugepages 00:03:37.708 ************************************ 00:03:37.708 17:12:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:37.969 * Looking for test storage... 
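The denied and allowed tests above drive scripts/setup.sh through the PCI_BLOCKED and PCI_ALLOWED environment variables, which control which controllers the script will rebind. A minimal sketch of that pattern, reusing the BDF and repo path from this run; the surrounding shell is illustrative, not the autotest code itself (run as root):

#!/usr/bin/env bash
# Sketch: deny, then allow, a single NVMe controller for SPDK's setup.sh.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
BDF=0000:d8:00.0

# Denied: the blocked controller stays on its kernel driver and setup.sh
# logs "Skipping denied controller at 0000:d8:00.0", as seen above.
PCI_BLOCKED="$BDF" "$SPDK_DIR/scripts/setup.sh" config
"$SPDK_DIR/scripts/setup.sh" reset

# Allowed: only the listed controller is rebound (nvme -> vfio-pci here).
PCI_ALLOWED="$BDF" "$SPDK_DIR/scripts/setup.sh" config
"$SPDK_DIR/scripts/setup.sh" reset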
00:03:37.969 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:37.969 17:12:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:37.969 17:12:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:37.969 17:12:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:37.969 17:12:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:37.969 17:12:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:37.969 17:12:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:37.969 17:12:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:37.969 17:12:57 -- scripts/common.sh@335 -- # IFS=.-: 00:03:37.969 17:12:57 -- scripts/common.sh@335 -- # read -ra ver1 00:03:37.969 17:12:57 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.969 17:12:57 -- scripts/common.sh@336 -- # read -ra ver2 00:03:37.969 17:12:57 -- scripts/common.sh@337 -- # local 'op=<' 00:03:37.969 17:12:57 -- scripts/common.sh@339 -- # ver1_l=2 00:03:37.969 17:12:57 -- scripts/common.sh@340 -- # ver2_l=1 00:03:37.969 17:12:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:37.969 17:12:57 -- scripts/common.sh@343 -- # case "$op" in 00:03:37.969 17:12:57 -- scripts/common.sh@344 -- # : 1 00:03:37.969 17:12:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:37.969 17:12:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:37.969 17:12:57 -- scripts/common.sh@364 -- # decimal 1 00:03:37.969 17:12:57 -- scripts/common.sh@352 -- # local d=1 00:03:37.969 17:12:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.969 17:12:57 -- scripts/common.sh@354 -- # echo 1 00:03:37.969 17:12:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:37.969 17:12:57 -- scripts/common.sh@365 -- # decimal 2 00:03:37.969 17:12:57 -- scripts/common.sh@352 -- # local d=2 00:03:37.969 17:12:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.969 17:12:57 -- scripts/common.sh@354 -- # echo 2 00:03:37.969 17:12:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:37.969 17:12:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:37.969 17:12:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:37.969 17:12:57 -- scripts/common.sh@367 -- # return 0 00:03:37.969 17:12:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.969 17:12:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:37.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.969 --rc genhtml_branch_coverage=1 00:03:37.969 --rc genhtml_function_coverage=1 00:03:37.969 --rc genhtml_legend=1 00:03:37.969 --rc geninfo_all_blocks=1 00:03:37.969 --rc geninfo_unexecuted_blocks=1 00:03:37.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:37.969 ' 00:03:37.969 17:12:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:37.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.969 --rc genhtml_branch_coverage=1 00:03:37.969 --rc genhtml_function_coverage=1 00:03:37.969 --rc genhtml_legend=1 00:03:37.969 --rc geninfo_all_blocks=1 00:03:37.969 --rc geninfo_unexecuted_blocks=1 00:03:37.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:37.969 ' 00:03:37.969 17:12:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:37.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.969 --rc genhtml_branch_coverage=1 
00:03:37.969 --rc genhtml_function_coverage=1 00:03:37.969 --rc genhtml_legend=1 00:03:37.969 --rc geninfo_all_blocks=1 00:03:37.969 --rc geninfo_unexecuted_blocks=1 00:03:37.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:37.969 ' 00:03:37.969 17:12:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:37.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.969 --rc genhtml_branch_coverage=1 00:03:37.969 --rc genhtml_function_coverage=1 00:03:37.969 --rc genhtml_legend=1 00:03:37.969 --rc geninfo_all_blocks=1 00:03:37.969 --rc geninfo_unexecuted_blocks=1 00:03:37.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:37.969 ' 00:03:37.969 17:12:57 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:37.969 17:12:57 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:37.969 17:12:57 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:37.969 17:12:57 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:37.969 17:12:57 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:37.969 17:12:57 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:37.969 17:12:57 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:37.969 17:12:57 -- setup/common.sh@18 -- # local node= 00:03:37.969 17:12:57 -- setup/common.sh@19 -- # local var val 00:03:37.969 17:12:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.969 17:12:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.969 17:12:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.969 17:12:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.969 17:12:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.969 17:12:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.969 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.969 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.969 17:12:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 41761876 kB' 'MemAvailable: 43598264 kB' 'Buffers: 3740 kB' 'Cached: 9890664 kB' 'SwapCached: 32 kB' 'Active: 8878176 kB' 'Inactive: 1533412 kB' 'Active(anon): 8369708 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520496 kB' 'Mapped: 185320 kB' 'Shmem: 7870256 kB' 'KReclaimable: 484452 kB' 'Slab: 1400292 kB' 'SReclaimable: 484452 kB' 'SUnreclaim: 915840 kB' 'KernelStack: 21888 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433336 kB' 'Committed_AS: 9578820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217876 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB' 00:03:37.969 17:12:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.969 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.969 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.969 17:12:57 -- setup/common.sh@31 
-- # read -r var val _ 00:03:37.969 [per-field trace condensed: the IFS=': ' / read -r var val _ / compare-to-Hugepagesize / continue sequence repeats identically for each remaining /proc/meminfo field, MemFree through HugePages_Surp] 00:03:37.971 17:12:57 -- 
setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.971 17:12:57 -- setup/common.sh@33 -- # echo 2048 00:03:37.971 17:12:57 -- setup/common.sh@33 -- # return 0 00:03:37.971 17:12:57 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:37.971 17:12:57 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:37.971 17:12:57 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:37.971 17:12:57 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:37.971 17:12:57 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:37.971 17:12:57 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:37.971 17:12:57 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:37.971 17:12:57 -- setup/hugepages.sh@207 -- # get_nodes 00:03:37.971 17:12:57 -- setup/hugepages.sh@27 -- # local node 00:03:37.971 17:12:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.971 17:12:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:37.971 17:12:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.971 17:12:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.971 17:12:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.971 17:12:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.971 17:12:57 -- setup/hugepages.sh@208 -- # clear_hp 00:03:37.971 17:12:57 -- setup/hugepages.sh@37 -- # local node hp 00:03:37.971 17:12:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:37.971 17:12:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.971 17:12:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:37.971 17:12:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.971 17:12:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:37.971 17:12:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:37.971 17:12:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.971 17:12:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:37.971 17:12:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.971 17:12:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:37.971 17:12:57 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:37.971 17:12:57 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:37.971 17:12:57 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:37.971 17:12:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.971 17:12:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.971 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:03:37.971 ************************************ 00:03:37.971 START TEST default_setup 00:03:37.971 ************************************ 00:03:37.971 17:12:57 -- common/autotest_common.sh@1114 -- # default_setup 00:03:37.971 17:12:57 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:37.971 17:12:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:37.971 17:12:57 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:37.971 17:12:57 -- setup/hugepages.sh@51 -- # shift 00:03:37.971 17:12:57 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:37.971 17:12:57 -- setup/hugepages.sh@52 -- # local node_ids 00:03:37.971 17:12:57 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:03:37.971 17:12:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:37.971 17:12:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:37.971 17:12:57 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:37.971 17:12:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.971 17:12:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:37.971 17:12:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.971 17:12:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.971 17:12:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.971 17:12:57 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:37.971 17:12:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.971 17:12:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:37.971 17:12:57 -- setup/hugepages.sh@73 -- # return 0 00:03:37.971 17:12:57 -- setup/hugepages.sh@137 -- # setup output 00:03:37.971 17:12:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.971 17:12:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:42.170 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:42.170 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:43.109 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:43.373 17:13:02 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:43.373 17:13:02 -- setup/hugepages.sh@89 -- # local node 00:03:43.373 17:13:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.373 17:13:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.373 17:13:02 -- setup/hugepages.sh@92 -- # local surp 00:03:43.373 17:13:02 -- setup/hugepages.sh@93 -- # local resv 00:03:43.373 17:13:02 -- setup/hugepages.sh@94 -- # local anon 00:03:43.373 17:13:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.373 17:13:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.373 17:13:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.373 17:13:02 -- setup/common.sh@18 -- # local node= 00:03:43.373 17:13:02 -- setup/common.sh@19 -- # local var val 00:03:43.373 17:13:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.373 17:13:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.373 17:13:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.373 17:13:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.373 17:13:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.373 17:13:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 
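The clear_hp and get_test_nr_hugepages traces above first zero every per-node hugepage pool and then request 1024 pages of the default 2048 kB size on node 0. The same steps can be done directly against sysfs; a minimal sketch assuming two NUMA nodes and a 2048 kB hugepage size, as this machine reports (run as root):

#!/usr/bin/env bash
# Sketch: clear all per-node hugepage pools, then reserve 1024 x 2048 kB pages on node 0.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*kB; do
        echo 0 > "$hp/nr_hugepages"   # drop any existing reservation
    done
done
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# HugePages_Total should now read 1024, matching the verify_nr_hugepages check below.
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo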
00:03:43.373 17:13:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44004260 kB' 'MemAvailable: 45840616 kB' 'Buffers: 3740 kB' 'Cached: 9890808 kB' 'SwapCached: 32 kB' 'Active: 8879852 kB' 'Inactive: 1533412 kB' 'Active(anon): 8371384 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522596 kB' 'Mapped: 185060 kB' 'Shmem: 7870400 kB' 'KReclaimable: 484420 kB' 'Slab: 1398724 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914304 kB' 'KernelStack: 21936 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9581180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB' 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 
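The compare/continue trace running through these lines is get_meminfo scanning the printf dump above field by field until it reaches the requested key (AnonHugePages at this point in the run). A standalone sketch of the same loop reading /proc/meminfo directly; get_meminfo_field is a hypothetical name, not the helper from setup/common.sh:

#!/usr/bin/env bash
# Sketch: print one field of /proc/meminfo, mirroring the traced IFS/read/compare loop.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # value in kB, or a bare count for HugePages_* fields
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo_field Hugepagesize    # e.g. 2048
get_meminfo_field AnonHugePages   # e.g. 0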
00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.373 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # 
continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.374 17:13:02 -- setup/common.sh@33 -- # echo 0 00:03:43.374 17:13:02 -- setup/common.sh@33 -- # return 0 00:03:43.374 17:13:02 -- setup/hugepages.sh@97 -- # anon=0 00:03:43.374 17:13:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.374 17:13:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.374 17:13:02 -- setup/common.sh@18 -- # local node= 00:03:43.374 17:13:02 -- setup/common.sh@19 -- # local var val 00:03:43.374 17:13:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.374 17:13:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.374 17:13:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.374 17:13:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.374 17:13:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.374 17:13:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.374 17:13:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44009496 kB' 'MemAvailable: 45845852 kB' 'Buffers: 3740 kB' 'Cached: 9890808 kB' 'SwapCached: 32 kB' 'Active: 8880300 kB' 'Inactive: 1533412 kB' 'Active(anon): 8371832 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522480 kB' 'Mapped: 185076 kB' 'Shmem: 7870400 kB' 'KReclaimable: 484420 kB' 'Slab: 1398740 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914320 kB' 'KernelStack: 21984 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
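Earlier in this pass, hugepages.sh@96 evaluated [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: that string is the content of /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active THP mode, and the trace shows AnonHugePages being sampled (0 kB here) only after that test passes. A hedged sketch of the same decision, with illustrative variable names:

    #!/usr/bin/env bash
    # The kernel brackets the active THP mode, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)

    if [[ $thp != *"[never]"* ]]; then
        # THP can back anonymous mappings, so AnonHugePages has to be sampled
        # and accounted for before judging the static hugepage pool.
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "THP mode: $thp; AnonHugePages: ${anon_kb} kB"
    else
        echo "THP is [never]; anonymous hugepages cannot appear"
    fi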
00:03:43.374 17:13:02 -- setup/common.sh@31 -- # IFS=': '
00:03:43.374 17:13:02 -- setup/common.sh@31 -- # read -r var val _
00:03:43.374 17:13:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.374 17:13:02 -- setup/common.sh@32 -- # continue
[xtrace elided: the same @32 test, @32 continue, @31 IFS=': ', @31 read -r var val _ iteration repeats for every remaining field, MemFree through HugePages_Rsvd]
00:03:43.376 17:13:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.376 17:13:02 -- setup/common.sh@33 -- # echo 0
00:03:43.376 17:13:02 -- setup/common.sh@33 -- # return 0
00:03:43.376 17:13:02 -- setup/hugepages.sh@99 -- # surp=0
00:03:43.376 17:13:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:43.376 17:13:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:43.376 17:13:03 -- setup/common.sh@18 -- # local node=
00:03:43.376 17:13:03 -- setup/common.sh@19 -- # local var val
00:03:43.376 17:13:03 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.376 17:13:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.376 17:13:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.376 17:13:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.376 17:13:03 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.376 17:13:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.376 17:13:03 -- setup/common.sh@31 -- # IFS=': '
00:03:43.376 17:13:03 -- setup/common.sh@31 -- # read -r var val _
00:03:43.376 17:13:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44010132 kB' 'MemAvailable: 45846488 kB' 'Buffers: 3740 kB' 'Cached: 9890820 kB' 'SwapCached: 32 kB' 'Active: 8879144 kB' 'Inactive: 1533412 kB' 'Active(anon): 8370676 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521776 kB' 'Mapped: 185056 kB' 'Shmem: 7870412 kB' 'KReclaimable: 484420 kB' 'Slab: 1398804 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914384 kB' 'KernelStack: 22000 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9579696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218116 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
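Each counter costs a full scan in the trace because get_meminfo looks up one field per call. When all four hugepage counters are wanted at once, a single pass works as well; this awk variant is an equivalent shortcut, not how common.sh actually does it:

    #!/usr/bin/env bash
    # One pass over /proc/meminfo for all four hugepage counters.
    read -r total free rsvd surp < <(awk '
        /^HugePages_Total:/ {t=$2}
        /^HugePages_Free:/  {f=$2}
        /^HugePages_Rsvd:/  {r=$2}
        /^HugePages_Surp:/  {s=$2}
        END {print t, f, r, s}' /proc/meminfo)
    echo "total=$total free=$free rsvd=$rsvd surp=$surp"
    # On this runner: total=1024 free=1024 rsvd=0 surp=0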
00:03:43.376 17:13:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:43.376 17:13:03 -- setup/common.sh@32 -- # continue
[xtrace elided: the same @32 test, @32 continue, @31 IFS=': ', @31 read -r var val _ iteration repeats for every remaining field, MemFree through HugePages_Free]
00:03:43.378 17:13:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:43.378 17:13:03 -- setup/common.sh@33 -- # echo 0
00:03:43.378 17:13:03 -- setup/common.sh@33 -- # return 0
00:03:43.378 17:13:03 -- setup/hugepages.sh@100 -- # resv=0
00:03:43.378 17:13:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
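With anon, surp and resv all resolved to 0, the checks that follow (hugepages.sh@107 through @110) assert that the requested pool size matches what the kernel actually reports. A sketch of that consistency arithmetic, using awk lookups in place of the script's get_meminfo helper:

    #!/usr/bin/env bash
    # Sketch of the verify_nr_hugepages arithmetic: the kernel's pool must
    # equal the configured target plus surplus and reserved pages.
    target=1024
    kernel_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

    if (( kernel_total == target + surp + resv )); then
        echo "hugepage pool consistent: $kernel_total == $target + $surp + $resv"
    else
        echo "hugepage pool mismatch: $kernel_total != $target + $surp + $resv" >&2
        exit 1
    fi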
00:03:43.378 17:13:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:43.378 17:13:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:43.378 17:13:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:43.378 17:13:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:43.378 17:13:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:43.378 17:13:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:43.378 17:13:03 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:43.378 17:13:03 -- setup/common.sh@18 -- # local node=
00:03:43.378 17:13:03 -- setup/common.sh@19 -- # local var val
00:03:43.378 17:13:03 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.378 17:13:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.378 17:13:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.378 17:13:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.378 17:13:03 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.378 17:13:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.378 17:13:03 -- setup/common.sh@31 -- # IFS=': '
00:03:43.378 17:13:03 -- setup/common.sh@31 -- # read -r var val _
00:03:43.378 17:13:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44011716 kB' 'MemAvailable: 45848072 kB' 'Buffers: 3740 kB' 'Cached: 9890840 kB' 'SwapCached: 32 kB' 'Active: 8879728 kB' 'Inactive: 1533412 kB' 'Active(anon): 8371260 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521912 kB' 'Mapped: 185056 kB' 'Shmem: 7870432 kB' 'KReclaimable: 484420 kB' 'Slab: 1398804 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914384 kB' 'KernelStack: 21920 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9579712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218084 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
00:03:43.378 17:13:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:43.378 17:13:03 -- setup/common.sh@32 -- # continue
[xtrace elided: the same @32 test, @32 continue, @31 IFS=': ', @31 read -r var val _ iteration repeats for every remaining field, MemFree through Unaccepted]
00:03:43.379 17:13:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:43.379 17:13:03 -- setup/common.sh@33 -- # echo 1024
00:03:43.379 17:13:03 -- setup/common.sh@33 -- # return 0
00:03:43.379 17:13:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:43.379 17:13:03 -- setup/hugepages.sh@112 -- # get_nodes
00:03:43.379 17:13:03 -- setup/hugepages.sh@27 -- # local node
00:03:43.379 17:13:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.379 17:13:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:43.379 17:13:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.379 17:13:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:43.379 17:13:03 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:43.379 17:13:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:43.379 17:13:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:43.379 17:13:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:43.380 17:13:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:43.380 17:13:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.380 17:13:03 -- setup/common.sh@18 -- # local node=0
00:03:43.380 17:13:03 -- setup/common.sh@19 -- # local var val
00:03:43.380 17:13:03 -- setup/common.sh@20 -- # local mem_f mem
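get_nodes above walks /sys/devices/system/node/node<N> and records one hugepage count per NUMA node (1024 on node0, 0 on node1 on this runner), after which verify_nr_hugepages re-queries each node through its per-node meminfo. A sketch of that enumeration; the hugepages/hugepages-2048kB/nr_hugepages path is the standard sysfs layout for the 2048 kB page size shown in the dumps, though the exact file the SPDK helper reads is not visible in the trace:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -A nodes_sys

    # One persistent-pool size per NUMA node, keyed by node number.
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}   # "/sys/.../node0" -> "0"
        nodes_sys[$n]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    for n in "${!nodes_sys[@]}"; do
        echo "node$n: ${nodes_sys[$n]} hugepages"
    done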
00:03:43.380 17:13:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.380 17:13:03 -- setup/common.sh@18 -- # local node=0
00:03:43.380 17:13:03 -- setup/common.sh@19 -- # local var val
00:03:43.380 17:13:03 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.380 17:13:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.380 17:13:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:43.380 17:13:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:43.380 17:13:03 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.380 17:13:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.380 17:13:03 -- setup/common.sh@31 -- # IFS=': '
00:03:43.380 17:13:03 -- setup/common.sh@31 -- # read -r var val _
00:03:43.380 17:13:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25037652 kB' 'MemUsed: 7547716 kB' 'SwapCached: 32 kB' 'Active: 3909944 kB' 'Inactive: 270204 kB' 'Active(anon): 3528988 kB' 'Inactive(anon): 60 kB' 'Active(file): 380956 kB' 'Inactive(file): 270144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3753108 kB' 'Mapped: 131356 kB' 'AnonPages: 430276 kB' 'Shmem: 3101976 kB' 'KernelStack: 12152 kB' 'PageTables: 5652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254632 kB' 'Slab: 668656 kB' 'SReclaimable: 254632 kB' 'SUnreclaim: 414024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[scan: node0 keys MemTotal through HugePages_Free all fail the HugePages_Surp match; each pass repeats IFS=': ', read -r var val _, continue]
00:03:43.381 17:13:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.381 17:13:03 -- setup/common.sh@33 -- # echo 0
00:03:43.381 17:13:03 -- setup/common.sh@33 -- # return 0
00:03:43.381 17:13:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:43.381 17:13:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:43.381 17:13:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:43.381 17:13:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:43.381 17:13:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:43.381 node0=1024 expecting 1024
00:03:43.381 17:13:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:43.381 real 0m5.393s
00:03:43.381 user 0m1.439s
00:03:43.381 sys 0m2.459s
00:03:43.381 17:13:03 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:43.381 17:13:03 -- common/autotest_common.sh@10 -- # set +x
00:03:43.381 ************************************
00:03:43.381 END TEST default_setup
00:03:43.381 ************************************
00:03:43.381 17:13:03 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:43.381 17:13:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:43.381 17:13:03 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:43.381 17:13:03 -- common/autotest_common.sh@10 -- # set +x
00:03:43.381 ************************************
00:03:43.381 START TEST per_node_1G_alloc
00:03:43.381 ************************************
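per_node_1G_alloc asks for 1 GiB of default-size hugepages on each of nodes 0 and 1. A minimal sketch of the conversion get_test_nr_hugepages performs in the trace below, assuming the 2048 kB Hugepagesize reported in the meminfo dumps:

    size=1048576                                 # requested kB (1 GiB per node)
    default_hugepages=2048                       # kB per page (Hugepagesize)
    nr_hugepages=$((size / default_hugepages))   # 512 pages per node
    node_ids=(0 1)
    nodes_test=()
    for id in "${node_ids[@]}"; do
        nodes_test[id]=$nr_hugepages             # 512 on node 0 and on node 1
    done
    echo "expecting $((nr_hugepages * ${#node_ids[@]})) pages total"   # 1024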
00:03:43.381 17:13:03 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:03:43.381 17:13:03 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:43.381 17:13:03 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:43.381 17:13:03 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:43.381 17:13:03 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:43.381 17:13:03 -- setup/hugepages.sh@51 -- # shift
00:03:43.381 17:13:03 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:43.381 17:13:03 -- setup/hugepages.sh@52 -- # local node_ids
00:03:43.381 17:13:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:43.381 17:13:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:43.381 17:13:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:43.381 17:13:03 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:43.381 17:13:03 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:43.381 17:13:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:43.381 17:13:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:43.381 17:13:03 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:43.381 17:13:03 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:43.381 17:13:03 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:43.381 17:13:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:43.381 17:13:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:43.381 17:13:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:43.381 17:13:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:43.381 17:13:03 -- setup/hugepages.sh@73 -- # return 0
00:03:43.381 17:13:03 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:43.381 17:13:03 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:43.381 17:13:03 -- setup/hugepages.sh@146 -- # setup output
00:03:43.381 17:13:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.381 17:13:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:46.678 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:46.678 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
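With NRHUGE=512 and HUGENODE=0,1, scripts/setup.sh reserves the pages on each node and rebinds devices (hence the vfio-pci lines above). The per-node reservation itself normally goes through the kernel's sysfs knob; a hedged sketch of that step, illustrative rather than the actual setup.sh code:

    NRHUGE=512
    IFS=',' read -ra nodes <<<"${HUGENODE:-0,1}"
    for n in "${nodes[@]}"; do
        # Standard kernel interface for per-node 2 MiB hugepage reservation (requires root).
        echo "$NRHUGE" > "/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages"
    done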
00:03:46.942 17:13:06 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:46.942 17:13:06 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:46.942 17:13:06 -- setup/hugepages.sh@89 -- # local node
00:03:46.942 17:13:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:46.942 17:13:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:46.942 17:13:06 -- setup/hugepages.sh@92 -- # local surp
00:03:46.942 17:13:06 -- setup/hugepages.sh@93 -- # local resv
00:03:46.942 17:13:06 -- setup/hugepages.sh@94 -- # local anon
00:03:46.942 17:13:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:46.942 17:13:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:46.942 17:13:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:46.942 17:13:06 -- setup/common.sh@18 -- # local node=
00:03:46.942 17:13:06 -- setup/common.sh@19 -- # local var val
00:03:46.942 17:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:03:46.942 17:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.942 17:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.942 17:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.942 17:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.942 17:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.942 17:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 43996116 kB' 'MemAvailable: 45832472 kB' 'Buffers: 3740 kB' 'Cached: 9890932 kB' 'SwapCached: 32 kB' 'Active: 8879088 kB' 'Inactive: 1533412 kB' 'Active(anon): 8370620 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521000 kB' 'Mapped: 184144 kB' 'Shmem: 7870524 kB' 'KReclaimable: 484420 kB' 'Slab: 1399608 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 915188 kB' 'KernelStack: 21824 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9571244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218100 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[scan: keys MemTotal through HardwareCorrupted all fail the AnonHugePages match; each pass repeats IFS=': ', read -r var val _, continue]
00:03:46.943 17:13:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.943 17:13:06 -- setup/common.sh@33 -- # echo 0
00:03:46.943 17:13:06 -- setup/common.sh@33 -- # return 0
00:03:46.943 17:13:06 -- setup/hugepages.sh@97 -- # anon=0
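Before trusting the counters, verify_nr_hugepages rules out transparent hugepages skewing them: the @96 test above checks that /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never") does not read [never], and only then samples AnonHugePages. Roughly, using the get_meminfo sketch from earlier:

    anon=0
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run, so anon stays 0
    fi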
00:03:46.943 17:13:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:46.943 17:13:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.943 17:13:06 -- setup/common.sh@18 -- # local node=
00:03:46.943 17:13:06 -- setup/common.sh@19 -- # local var val
00:03:46.943 17:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:03:46.943 17:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.943 17:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.943 17:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.943 17:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.943 17:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.943 17:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 43997124 kB' 'MemAvailable: 45833480 kB' 'Buffers: 3740 kB' 'Cached: 9890936 kB' 'SwapCached: 32 kB' 'Active: 8879340 kB' 'Inactive: 1533412 kB' 'Active(anon): 8370872 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521212 kB' 'Mapped: 184120 kB' 'Shmem: 7870528 kB' 'KReclaimable: 484420 kB' 'Slab: 1399608 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 915188 kB' 'KernelStack: 21792 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9571256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[scan: keys MemTotal through HugePages_Rsvd all fail the HugePages_Surp match; each pass repeats IFS=': ', read -r var val _, continue]
00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.945 17:13:06 -- setup/common.sh@33 -- # echo 0
00:03:46.945 17:13:06 -- setup/common.sh@33 -- # return 0
00:03:46.945 17:13:06 -- setup/hugepages.sh@99 -- # surp=0
00:03:46.945 17:13:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:46.945 17:13:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:46.945 17:13:06 -- setup/common.sh@18 -- # local node=
00:03:46.945 17:13:06 -- setup/common.sh@19 -- # local var val
00:03:46.945 17:13:06 -- setup/common.sh@20 -- # local mem_f mem
00:03:46.945 17:13:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.945 17:13:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.945 17:13:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.945 17:13:06 -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.945 17:13:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.945 17:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 43996788 kB' 'MemAvailable: 45833144 kB' 'Buffers: 3740 kB' 'Cached: 9890944 kB' 'SwapCached: 32 kB' 'Active: 8879652 kB' 'Inactive: 1533412 kB' 'Active(anon): 8371184 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521572 kB' 'Mapped: 184044 kB' 'Shmem: 7870536 kB' 'KReclaimable: 484420 kB' 'Slab: 1399572 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 915152 kB' 'KernelStack: 21824 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9571272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.945 17:13:06 -- setup/common.sh@32 -- # continue 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.945 17:13:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.945 17:13:06 -- 
00:03:46.945-946 17:13:06 -- setup/common.sh@31-32 -- # the read loop issues 'continue' for every remaining key checked against HugePages_Rsvd (Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free), then matches:
00:03:46.946 17:13:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:46.946 17:13:06 -- setup/common.sh@33 -- # echo 0
00:03:46.946 17:13:06 -- setup/common.sh@33 -- # return 0
00:03:46.946 17:13:06 -- setup/hugepages.sh@100 -- # resv=0
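The block above is one full pass of the key-matching loop in setup/common.sh's get_meminfo: read a 'Key: value' pair, continue on a mismatch, echo the value on a hit. A minimal Bash sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK source (the 'Node N ' prefix strip at common.sh@29 is simplified to a sed here):

    # Sketch, reconstructed from the trace, not the verbatim setup/common.sh.
    # Scan a meminfo-style file and print the value for one key.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # with a node index, read the per-node sysfs copy instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
            continue   # each non-matching key logs one 'continue' line above
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

On this box, get_meminfo HugePages_Rsvd prints 0, which is exactly the echo 0 / resv=0 pair logged above.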
00:03:46.947 17:13:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:46.947 17:13:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:46.947 17:13:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:46.947 17:13:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:46.947 17:13:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.947 17:13:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:46.947 17:13:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:46.947 17:13:06 -- setup/common.sh@17-31 -- # get=HugePages_Total, node= (system-wide, so mem_f stays /proc/meminfo), mapfile and IFS=': ' read loop begin
00:03:46.947 17:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 43997884 kB' 'MemAvailable: 45834240 kB' 'Buffers: 3740 kB' 'Cached: 9890960 kB' 'SwapCached: 32 kB' 'Active: 8879320 kB' 'Inactive: 1533412 kB' 'Active(anon): 8370852 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521228 kB' 'Mapped: 184044 kB' 'Shmem: 7870552 kB' 'KReclaimable: 484420 kB' 'Slab: 1399572 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 915152 kB' 'KernelStack: 21808 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9571284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
00:03:46.947-948 17:13:06 -- setup/common.sh@31-32 -- # the read loop issues 'continue' for every key from MemTotal through HugePages_Free, then matches HugePages_Total
00:03:46.948 17:13:06 -- setup/common.sh@33 -- # echo 1024
00:03:46.948 17:13:06 -- setup/common.sh@33 -- # return 0
00:03:46.948 17:13:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.948 17:13:06 -- setup/hugepages.sh@112 -- # get_nodes
00:03:46.948 17:13:06 -- setup/hugepages.sh@27 -- # local node
00:03:46.948 17:13:06 -- setup/hugepages.sh@29-30 -- # for node in /sys/devices/system/node/node+([0-9]): nodes_sys[0]=512, nodes_sys[1]=512
00:03:46.948 17:13:06 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:46.948 17:13:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:46.948 17:13:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:46.948 17:13:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:46.948 17:13:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:46.948 17:13:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.948 17:13:06 -- setup/common.sh@18 -- # local node=0
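The get_nodes trace just above (hugepages.sh@27-33) is a sysfs glob over NUMA nodes; with surp=0 and resv=0, the earlier guard (( 1024 == nr_hugepages + surp + resv )) reduces to 1024 == 1024 + 0 + 0. A hedged sketch of the discovery loop, assuming extglob is enabled (the +([0-9]) glob in the trace requires it) and using this test's fixed 512-page per-node expectation:

    # Sketch of the node-discovery idea behind get_nodes (reconstructed).
    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # key: trailing node index (0, 1, ...)
    done
    no_nodes=${#nodes_sys[@]}           # 2 on this dual-socket box
    (( no_nodes > 0 ))                  # fail if no NUMA nodes were found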
00:03:46.948 17:13:06 -- setup/common.sh@19-31 -- # local var val, mem_f; /sys/devices/system/node/node0/meminfo exists, so common.sh@24 switches mem_f to it; mapfile strips the 'Node 0 ' prefixes and the IFS=': ' read loop starts
00:03:46.948 17:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26059208 kB' 'MemUsed: 6526160 kB' 'SwapCached: 32 kB' 'Active: 3910316 kB' 'Inactive: 270204 kB' 'Active(anon): 3529360 kB' 'Inactive(anon): 60 kB' 'Active(file): 380956 kB' 'Inactive(file): 270144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3753200 kB' 'Mapped: 130412 kB' 'AnonPages: 430508 kB' 'Shmem: 3102068 kB' 'KernelStack: 12120 kB' 'PageTables: 5548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254632 kB' 'Slab: 669284 kB' 'SReclaimable: 254632 kB' 'SUnreclaim: 414652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:46.949-950 17:13:06 -- setup/common.sh@31-32 -- # the read loop issues 'continue' for every node0 key from MemTotal through HugePages_Free, then matches:
00:03:46.950 17:13:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.950 17:13:06 -- setup/common.sh@33 -- # echo 0
00:03:46.950 17:13:06 -- setup/common.sh@33 -- # return 0
00:03:46.950 17:13:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:46.950 17:13:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:46.950 17:13:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:46.950 17:13:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:46.950 17:13:06 -- setup/common.sh@17-31 -- # get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo; mapfile and read loop as above
00:03:46.950 17:13:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698400 kB' 'MemFree: 17938928 kB' 'MemUsed: 9759472 kB' 'SwapCached: 0 kB' 'Active: 4969048 kB' 'Inactive: 1263208 kB' 'Active(anon): 4841536 kB' 'Inactive(anon): 17672 kB' 'Active(file): 127512 kB' 'Inactive(file): 1245536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6141548 kB' 'Mapped: 53632 kB' 'AnonPages: 90708 kB' 'Shmem: 4768500 kB' 'KernelStack: 9688 kB' 'PageTables: 2516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 229788 kB' 'Slab: 730288 kB' 'SReclaimable: 229788 kB' 'SUnreclaim: 500500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:46.950-951 17:13:06 -- setup/common.sh@31-32 -- # the read loop issues 'continue' for every node1 key from MemTotal through HugePages_Free, then matches:
00:03:46.951 17:13:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.951 17:13:06 -- setup/common.sh@33 -- # echo 0
00:03:46.951 17:13:06 -- setup/common.sh@33 -- # return 0
00:03:46.951 17:13:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:46.951 17:13:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:46.951 17:13:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:46.951 17:13:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:46.951 17:13:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:46.951 17:13:06 -- setup/hugepages.sh@126-127 -- # the second pass records node1's counts the same way
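The sorted_t/sorted_s assignments above use array keys as a cheap set: every distinct per-node count becomes one key, so an even allocation collapses each set to a single element. A short sketch of that bookkeeping, assuming nodes_test and nodes_sys hold the values accumulated above:

    # Sketch of the per-node comparison loop (reconstructed from the trace).
    declare -A sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[$node]}]=1    # observed count becomes a set key
        sorted_s[${nodes_sys[$node]}]=1     # expected count becomes a set key
        echo "node$node=${nodes_test[$node]} expecting ${nodes_sys[$node]}"
    done
    # here both nodes hold 512 pages and 512 were expected, so each set is {512}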
00:03:46.951 17:13:06 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:46.951 17:13:06 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:46.951 real 0m3.535s
00:03:46.951 user 0m1.340s
00:03:46.951 sys 0m2.266s
00:03:46.951 17:13:06 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:46.951 17:13:06 -- common/autotest_common.sh@10 -- # set +x
00:03:46.951 ************************************
00:03:46.951 END TEST per_node_1G_alloc
00:03:46.951 ************************************
00:03:46.951 17:13:06 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:46.951 17:13:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:46.951 17:13:06 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:46.951 17:13:06 -- common/autotest_common.sh@10 -- # set +x
00:03:46.951 ************************************
00:03:46.951 START TEST even_2G_alloc
00:03:46.951 ************************************
00:03:46.951 17:13:06 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:03:46.951 17:13:06 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:46.951 17:13:06 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:46.951 17:13:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:46.951 17:13:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:46.951 17:13:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:46.951 17:13:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:46.951 17:13:06 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:46.951 17:13:06 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:46.951 17:13:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:46.951 17:13:06 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:47.211 17:13:06 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:47.211 17:13:06 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:47.211 17:13:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:47.211 17:13:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:47.211 17:13:06 -- setup/hugepages.sh@81-84 -- # the countdown loop assigns nodes_test[1]=512, then nodes_test[0]=512, and exits when _no_nodes is exhausted
00:03:47.211 17:13:06 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:47.211 17:13:06 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:47.211 17:13:06 -- setup/hugepages.sh@153 -- # setup output
00:03:47.211 17:13:06 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.211 17:13:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:50.515 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:50.515 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:50.515 17:13:10 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:50.515 17:13:10 -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:03:50.515 17:13:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:50.515 17:13:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:50.515 17:13:10 -- setup/common.sh@17-31 -- # get=AnonHugePages, node= (system-wide /proc/meminfo), mapfile and IFS=': ' read loop as before
00:03:50.516 17:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44000100 kB' 'MemAvailable: 45836456 kB' 'Buffers: 3740 kB' 'Cached: 9891052 kB' 'SwapCached: 32 kB' 'Active: 8879792 kB' 'Inactive: 1533412 kB' 'Active(anon): 8371324 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522456 kB' 'Mapped: 184060 kB' 'Shmem: 7870644 kB' 'KReclaimable: 484420 kB' 'Slab: 1399032 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914612 kB' 'KernelStack: 21840 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9571888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217940 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
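Before the trace dives back into meminfo scanning, the even_2G_alloc sizing above is worth restating as plain arithmetic: 2 GiB was requested, the printf reports Hugepagesize: 2048 kB, and the box has two nodes. A worked restatement of get_test_nr_hugepages / get_test_nr_hugepages_per_node under those inputs (names follow the trace; the real helpers also honor user-supplied node lists, skipped here):

    size=2097152                                  # requested kB (2 GiB)
    default_hugepages=2048                        # kB per page, from Hugepagesize
    nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024
    _no_nodes=2
    # even split: 1024 / 2 = 512 per node, matching nodes_test[0]=nodes_test[1]=512
    for (( node = _no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / _no_nodes ))
    done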
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:50.516 17:13:10 -- setup/common.sh@32 -- # continue
00:03:50.516 17:13:10 -- setup/common.sh@31..@32 -- # [xtrace condensed: the IFS=': ' / read -r var val _ / [[ field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for every remaining /proc/meminfo field, MemFree through HardwareCorrupted]
00:03:50.517 17:13:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:50.517 17:13:10 -- setup/common.sh@33 -- # echo 0
00:03:50.517 17:13:10 -- setup/common.sh@33 -- # return 0
00:03:50.517 17:13:10 -- setup/hugepages.sh@97 -- # anon=0
00:03:50.517 17:13:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:50.517 17:13:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.517 17:13:10 -- setup/common.sh@18 -- # local node=
00:03:50.517 17:13:10 -- setup/common.sh@19 -- # local var val
00:03:50.517 17:13:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.517 17:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.517 17:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.517 17:13:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.517 17:13:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.517 17:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.517 17:13:10 -- setup/common.sh@31 -- # IFS=': '
00:03:50.517 17:13:10 -- setup/common.sh@31 -- # read -r var val _
00:03:50.517 17:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44000792 kB' 'MemAvailable: 45837148 kB' 'Buffers: 3740 kB' 'Cached: 9891052 kB' 'SwapCached: 32 kB' 'Active: 8880948 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372480 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523076 kB' 'Mapped: 184060 kB' 'Shmem: 7870644 kB' 'KReclaimable: 484420 kB' 'Slab: 1399084 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914664 kB' 'KernelStack: 21792 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9571900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217892 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
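The trace above is `set -x` output from the get_meminfo helper in setup/common.sh: it mapfile-reads /proc/meminfo (or a per-node sysfs meminfo file when a node index is supplied), strips any leading "Node N " prefix, then walks the fields one read at a time until the requested key matches, echoing its value. A minimal runnable sketch of that logic, reconstructed only from the @17-@33 trace markers above; the upstream setup/common.sh may differ in detail:

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace; not copied from upstream SPDK.
    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # When a node index is given, read that NUMA node's stats from sysfs.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # sysfs lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long skip loop in the trace
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo AnonHugePages   # prints 0 on this box, stored as anon=0 above

Called once per key, the function rescans the whole snapshot each time, which is why the same field list is traced again for every lookup that follows.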
00:03:50.517 17:13:10 -- setup/common.sh@31..@32 -- # [xtrace condensed: every field from MemTotal through HugePages_Rsvd is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue]
00:03:50.518 17:13:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.518 17:13:10 -- setup/common.sh@33 -- # echo 0
00:03:50.518 17:13:10 -- setup/common.sh@33 -- # return 0
00:03:50.518 17:13:10 -- setup/hugepages.sh@99 -- # surp=0
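The backslash-heavy right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are an xtrace artifact, not source text: when the pattern operand of a [[ == ]] test is quoted, so it is matched literally rather than as a glob, `set -x` escapes every character when it echoes the command. A two-line illustration in any bash shell:

    set -x
    get=HugePages_Surp
    [[ MemFree == "$get" ]]   # xtrace prints: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]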
00:03:50.518 17:13:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:50.518 17:13:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:50.518 17:13:10 -- setup/common.sh@18 -- # local node=
00:03:50.518 17:13:10 -- setup/common.sh@19 -- # local var val
00:03:50.518 17:13:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.519 17:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.519 17:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.519 17:13:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.519 17:13:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.519 17:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.519 17:13:10 -- setup/common.sh@31 -- # IFS=': '
00:03:50.519 17:13:10 -- setup/common.sh@31 -- # read -r var val _
00:03:50.519 17:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44001704 kB' 'MemAvailable: 45838060 kB' 'Buffers: 3740 kB' 'Cached: 9891068 kB' 'SwapCached: 32 kB' 'Active: 8881300 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372832 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523520 kB' 'Mapped: 184052 kB' 'Shmem: 7870660 kB' 'KReclaimable: 484420 kB' 'Slab: 1399084 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914664 kB' 'KernelStack: 21808 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9571916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217876 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
00:03:50.519 17:13:10 -- setup/common.sh@31..@32 -- # [xtrace condensed: every field from MemTotal through HugePages_Free is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with continue]
00:03:50.520 17:13:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.520 17:13:10 -- setup/common.sh@33 -- # echo 0
00:03:50.520 17:13:10 -- setup/common.sh@33 -- # return 0
00:03:50.520 17:13:10 -- setup/hugepages.sh@100 -- # resv=0
00:03:50.520 17:13:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:50.520 nr_hugepages=1024
00:03:50.520 17:13:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.520 resv_hugepages=0
00:03:50.520 17:13:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.520 surplus_hugepages=0
00:03:50.520 17:13:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.520 anon_hugepages=0
00:03:50.520 17:13:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.520 17:13:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
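With anon=0, surp=0 and resv=0 collected, hugepages.sh echoes the four counters and then cross-checks the configured pool of 1024 pages against what the kernel actually reports. A sketch of that consistency check, assuming the get_meminfo sketch above; the exact hugepages.sh wording is inferred from the @102-@110 markers in the trace, not taken from the upstream file:

    nr_hugepages=1024                      # pool size requested by the test
    anon=$(get_meminfo AnonHugePages)      # 0 (kB) in this run
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The pool the kernel exposes must account for the request exactly:
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

Here that evaluates as (( 1024 == 1024 + 0 + 0 )), so the check passes and the script moves on to the per-node view.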
00:03:50.520 17:13:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.520 17:13:10 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.520 17:13:10 -- setup/common.sh@18 -- # local node=
00:03:50.520 17:13:10 -- setup/common.sh@19 -- # local var val
00:03:50.520 17:13:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.520 17:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.520 17:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.520 17:13:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.520 17:13:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.520 17:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.520 17:13:10 -- setup/common.sh@31 -- # IFS=': '
00:03:50.520 17:13:10 -- setup/common.sh@31 -- # read -r var val _
00:03:50.520 17:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44001704 kB' 'MemAvailable: 45838060 kB' 'Buffers: 3740 kB' 'Cached: 9891068 kB' 'SwapCached: 32 kB' 'Active: 8881212 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372744 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523432 kB' 'Mapped: 184052 kB' 'Shmem: 7870660 kB' 'KReclaimable: 484420 kB' 'Slab: 1399084 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914664 kB' 'KernelStack: 21808 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9571932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217892 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
00:03:50.521 17:13:10 -- setup/common.sh@31..@32 -- # [xtrace condensed: every field from MemTotal through Unaccepted is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped with continue]
00:03:50.522 17:13:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.522 17:13:10 -- setup/common.sh@33 -- # echo 1024
00:03:50.522 17:13:10 -- setup/common.sh@33 -- # return 0
00:03:50.522 17:13:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.522 17:13:10 -- setup/hugepages.sh@112 -- # get_nodes
00:03:50.522 17:13:10 -- setup/hugepages.sh@27 -- # local node
00:03:50.522 17:13:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.522 17:13:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:50.522 17:13:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.522 17:13:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:50.522 17:13:10 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:50.522 17:13:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:50.522 17:13:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.522 17:13:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:50.522 17:13:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:50.522 17:13:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.522 17:13:10 -- setup/common.sh@18 -- # local node=0
00:03:50.522 17:13:10 -- setup/common.sh@19 -- # local var val
00:03:50.522 17:13:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.522 17:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.522 17:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:50.522 17:13:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:50.522 17:13:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.522 17:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.522 17:13:10 -- setup/common.sh@31 -- # IFS=': '
00:03:50.522 17:13:10 -- setup/common.sh@31 -- # read -r var val _
00:03:50.522 17:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26062736 kB' 'MemUsed: 6522632 kB' 'SwapCached: 32 kB' 'Active: 3911568 kB' 'Inactive: 270204 kB' 'Active(anon): 3530612 kB' 'Inactive(anon): 60 kB' 'Active(file): 380956 kB' 'Inactive(file): 270144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3753276 kB' 'Mapped: 130420 kB' 'AnonPages: 432040 kB' 'Shmem: 3102144 kB' 'KernelStack: 12104 kB' 'PageTables: 5500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254632 kB' 'Slab: 668884 kB' 'SReclaimable: 254632 kB' 'SUnreclaim: 414252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
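The verification then goes per NUMA node: get_nodes finds two nodes and plans 512 pages on each (512 + 512 = 1024), and the loop that follows re-reads HugePages_Surp from each node's sysfs meminfo file, which is the node0 snapshot printed just above. A sketch of that walk using the variable names from the trace; how nodes_test is seeded is not visible in this excerpt, so the copy from nodes_sys below is an assumption:

    shopt -s extglob
    declare -a nodes_sys nodes_test
    resv=0                                  # from the HugePages_Rsvd pass above
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512       # 512 pages per node, as in the trace
    done
    no_nodes=${#nodes_sys[@]}               # 2 on this machine
    (( no_nodes > 0 )) || exit 1
    nodes_test=("${nodes_sys[@]}")          # assumption: start from the planned split
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))      # resv is 0 in this run
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done

Passing the node index makes the get_meminfo sketch above switch from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry the "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips.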
00:03:50.522 17:13:10 -- setup/common.sh@31..@32 -- # [xtrace condensed: every node0 field from MemTotal through HugePages_Free is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue]
00:03:50.524 17:13:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.524 17:13:10 -- setup/common.sh@33 -- # echo 0
00:03:50.524 17:13:10 -- setup/common.sh@33 -- # return 0
00:03:50.524 17:13:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.524 17:13:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.524 17:13:10 -- setup/hugepages.sh@116 -- # ((
nodes_test[node] += resv )) 00:03:50.524 17:13:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.524 17:13:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.524 17:13:10 -- setup/common.sh@18 -- # local node=1 00:03:50.524 17:13:10 -- setup/common.sh@19 -- # local var val 00:03:50.524 17:13:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.524 17:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.524 17:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.524 17:13:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.524 17:13:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.524 17:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 17:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698400 kB' 'MemFree: 17938908 kB' 'MemUsed: 9759492 kB' 'SwapCached: 0 kB' 'Active: 4969588 kB' 'Inactive: 1263208 kB' 'Active(anon): 4842076 kB' 'Inactive(anon): 17672 kB' 'Active(file): 127512 kB' 'Inactive(file): 1245536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6141608 kB' 'Mapped: 53632 kB' 'AnonPages: 91272 kB' 'Shmem: 4768560 kB' 'KernelStack: 9704 kB' 'PageTables: 2568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 229788 kB' 'Slab: 730200 kB' 'SReclaimable: 229788 kB' 'SUnreclaim: 500412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:50.524 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.525 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.525 17:13:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 
00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # continue 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.526 17:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.526 17:13:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.526 17:13:10 -- setup/common.sh@33 -- # echo 0 00:03:50.526 17:13:10 -- setup/common.sh@33 -- # return 0 00:03:50.526 17:13:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.526 17:13:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.526 17:13:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.526 17:13:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.526 17:13:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:50.526 node0=512 expecting 512 00:03:50.526 17:13:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.526 17:13:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.526 17:13:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.526 17:13:10 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:50.526 node1=512 expecting 512 00:03:50.526 17:13:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:50.526 00:03:50.526 real 0m3.484s 00:03:50.526 user 0m1.266s 00:03:50.526 sys 0m2.258s 00:03:50.526 17:13:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:50.526 17:13:10 -- common/autotest_common.sh@10 -- # set +x 00:03:50.526 ************************************ 00:03:50.526 END TEST even_2G_alloc 00:03:50.526 ************************************ 00:03:50.526 17:13:10 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:50.526 17:13:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.526 17:13:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.526 17:13:10 -- common/autotest_common.sh@10 -- # set +x 00:03:50.526 ************************************ 00:03:50.526 START TEST odd_alloc 00:03:50.526 ************************************ 00:03:50.526 17:13:10 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:50.526 17:13:10 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:50.526 17:13:10 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:50.526 17:13:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.526 17:13:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
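
The get_meminfo calls traced above all follow one pattern: pick /proc/meminfo, or the per-node /sys/devices/system/node/nodeN/meminfo when a node argument is given, strip the "Node N " prefix that the per-node files carry on every line, then read key by key until the requested field matches and echo its value. A minimal standalone sketch of that lookup in plain bash (the function name, return codes, and comments are ours, not the exact code of setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used below

    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # Per-node lookup only when a node was given and the kernel exposes it.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # "MemTotal: 12345 kB" -> var=MemTotal, val=12345, unit discarded
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # Example: get_meminfo_sketch HugePages_Surp 0   -> 0 against the node0 dump above

Scanning linearly and stopping at the first hit is why the trace shows one [[ ... ]] / continue pair per meminfo key before each value comes back.
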
00:03:50.526 17:13:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:50.526 17:13:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.527 17:13:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.527 17:13:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.527 17:13:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:50.527 17:13:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.527 17:13:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.527 17:13:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.527 17:13:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.527 17:13:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.527 17:13:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.527 17:13:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.527 17:13:10 -- setup/hugepages.sh@83 -- # : 513 00:03:50.527 17:13:10 -- setup/hugepages.sh@84 -- # : 1 00:03:50.527 17:13:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.527 17:13:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:50.527 17:13:10 -- setup/hugepages.sh@83 -- # : 0 00:03:50.527 17:13:10 -- setup/hugepages.sh@84 -- # : 0 00:03:50.527 17:13:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.527 17:13:10 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:50.527 17:13:10 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:50.527 17:13:10 -- setup/hugepages.sh@160 -- # setup output 00:03:50.527 17:13:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.527 17:13:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:53.941 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.941 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:54.205 17:13:13 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:54.205 17:13:13 -- setup/hugepages.sh@89 -- # local node 00:03:54.205 17:13:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.205 17:13:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.205 17:13:13 -- setup/hugepages.sh@92 -- # local surp 00:03:54.205 17:13:13 -- setup/hugepages.sh@93 -- # local resv 00:03:54.205 17:13:13 -- setup/hugepages.sh@94 -- # local anon 00:03:54.205 17:13:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.205 17:13:13 -- setup/hugepages.sh@97 -- # 
get_meminfo AnonHugePages 00:03:54.205 17:13:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.205 17:13:13 -- setup/common.sh@18 -- # local node= 00:03:54.205 17:13:13 -- setup/common.sh@19 -- # local var val 00:03:54.205 17:13:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.205 17:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.205 17:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.205 17:13:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.205 17:13:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.205 17:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.205 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.205 17:13:13 -- setup/common.sh@31 -- # read -r var val _
00:03:54.205 17:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44013412 kB' 'MemAvailable: 45849768 kB' 'Buffers: 3740 kB' 'Cached: 9891184 kB' 'SwapCached: 32 kB' 'Active: 8881468 kB' 'Inactive: 1533412 kB' 'Active(anon): 8373000 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522780 kB' 'Mapped: 184156 kB' 'Shmem: 7870776 kB' 'KReclaimable: 484420 kB' 'Slab: 1398872 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914452 kB' 'KernelStack: 22000 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480888 kB' 'Committed_AS: 9577404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218244 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[setup/common.sh@32 xtrace trimmed: every key from MemTotal through HardwareCorrupted is checked against AnonHugePages and skipped with continue]
00:03:54.206 17:13:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.206 17:13:13 -- setup/common.sh@33 -- # echo 0 00:03:54.206 17:13:13 -- setup/common.sh@33 -- # return 0
00:03:54.206 17:13:13 -- setup/hugepages.sh@97 -- # anon=0
00:03:54.206 17:13:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.206 17:13:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.206 17:13:13 -- setup/common.sh@18 -- # local node= 00:03:54.206 17:13:13 -- setup/common.sh@19 -- # local var val 00:03:54.206 17:13:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.206 17:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.206 17:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.206 17:13:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.206 17:13:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.206 17:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.206 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.206 17:13:13 -- setup/common.sh@31 -- # read -r var val _
00:03:54.207 17:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44016532 kB' 'MemAvailable: 45852888 kB' 'Buffers: 3740 kB' 'Cached: 9891196 kB' 'SwapCached: 32 kB' 'Active: 8881256 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372788 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522524 kB' 'Mapped: 184132 kB' 'Shmem: 7870788 kB' 'KReclaimable: 484420 kB' 'Slab: 1398840 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914420 kB' 'KernelStack: 21872 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480888 kB' 'Committed_AS: 9577416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218132 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[setup/common.sh@32 xtrace trimmed: every key of the dump, MemTotal through HugePages_Rsvd, is checked against HugePages_Surp and skipped with continue]
00:03:54.208 17:13:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.208 17:13:13 -- setup/common.sh@33 -- # echo 0 00:03:54.208 17:13:13 -- setup/common.sh@33 -- # return 0
00:03:54.208 17:13:13 -- setup/hugepages.sh@99 -- # surp=0
00:03:54.208 17:13:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.208 17:13:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.208 17:13:13 -- setup/common.sh@18 -- # local node= 00:03:54.208 17:13:13 -- setup/common.sh@19 -- # local var val 00:03:54.208 17:13:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.208 17:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.208 17:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.208 17:13:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.208 17:13:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.208 17:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.208 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.208 17:13:13 -- setup/common.sh@31 -- # read -r var val _
00:03:54.208 17:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44015836 kB' 'MemAvailable: 45852192 kB' 'Buffers: 3740 kB' 'Cached: 9891200 kB' 'SwapCached: 32 kB' 'Active: 8880892 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372424 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522584 kB' 'Mapped: 184056 kB' 'Shmem: 7870792 kB' 'KReclaimable: 484420 kB' 'Slab: 1398808 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914388 kB' 'KernelStack: 21952 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480888 kB' 'Committed_AS: 9577428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218228 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
00:03:54.209 17:13:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # continue
[setup/common.sh@32 xtrace: the per-key scan for HugePages_Rsvd proceeds the same way; the capture ends mid-scan, after the VmallocTotal check]
17:13:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.209 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.209 17:13:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 
17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.210 17:13:13 -- setup/common.sh@33 -- # echo 0 00:03:54.210 17:13:13 -- setup/common.sh@33 -- # return 0 00:03:54.210 17:13:13 -- setup/hugepages.sh@100 -- # resv=0 00:03:54.210 17:13:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:54.210 nr_hugepages=1025 00:03:54.210 17:13:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.210 resv_hugepages=0 00:03:54.210 17:13:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.210 surplus_hugepages=0 00:03:54.210 17:13:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.210 anon_hugepages=0 00:03:54.210 17:13:13 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:54.210 17:13:13 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:54.210 17:13:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.210 17:13:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.210 17:13:13 -- setup/common.sh@18 -- # local node= 00:03:54.210 17:13:13 -- setup/common.sh@19 -- # local var val 00:03:54.210 17:13:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.210 17:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.210 17:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.210 17:13:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.210 17:13:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.210 17:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44015308 kB' 'MemAvailable: 45851664 kB' 'Buffers: 3740 kB' 'Cached: 9891216 kB' 'SwapCached: 32 kB' 'Active: 8881228 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372760 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523020 kB' 'Mapped: 185064 kB' 'Shmem: 7870808 kB' 'KReclaimable: 484420 kB' 'Slab: 1398796 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914376 kB' 'KernelStack: 22064 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480888 kB' 'Committed_AS: 9580448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218228 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB' 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.210 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 
-- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 
00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 17:13:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.212 17:13:13 -- setup/common.sh@32 -- # continue 00:03:54.212 17:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.212 17:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.212 17:13:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.212 17:13:13 -- setup/common.sh@33 -- # echo 1025 00:03:54.212 17:13:13 -- setup/common.sh@33 -- # return 0 00:03:54.212 17:13:13 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:54.212 17:13:13 -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.212 17:13:13 -- setup/hugepages.sh@27 -- # local node 00:03:54.212 17:13:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.212 17:13:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.212 17:13:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
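Every get_meminfo call in this trace follows the same pattern: pick /proc/meminfo or a per-node sysfs meminfo file, snapshot it with mapfile, strip the "Node <n> " prefix that per-node files carry, then scan key/value pairs with IFS=': '. A minimal, self-contained sketch of that pattern (reconstructed from the trace, not the SPDK source itself):

    #!/usr/bin/env bash
    shopt -s extglob                      # the +([0-9]) pattern below needs extglob
    # get_meminfo <Key> [<node>] -- echo the value recorded for <Key>,
    # system-wide by default or for one NUMA node if <node> is given.
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ line
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node <n> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Total           # prints 1025 on the system traced here
    get_meminfo HugePages_Surp 0          # node-0 surplus pages (0 in this run)

The scan visits one key per iteration, which is exactly why the xtrace above shows one compare/continue pair per meminfo key before the requested key finally matches.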
00:03:54.212 17:13:13 -- setup/hugepages.sh@112 -- # get_nodes
00:03:54.212 17:13:13 -- setup/hugepages.sh@27 -- # local node
00:03:54.212 17:13:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:54.212 17:13:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:54.212 17:13:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:54.212 17:13:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:54.212 17:13:13 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:54.212 17:13:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:54.212 17:13:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:54.212 17:13:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:54.212 17:13:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:54.212 17:13:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.212 17:13:13 -- setup/common.sh@18 -- # local node=0
00:03:54.212 17:13:13 -- setup/common.sh@19 -- # local var val
00:03:54.212 17:13:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.212 17:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.212 17:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:54.212 17:13:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:54.212 17:13:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.212 17:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.212 17:13:13 -- setup/common.sh@31 -- # IFS=': '
00:03:54.212 17:13:13 -- setup/common.sh@31 -- # read -r var val _
00:03:54.212 17:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26070840 kB' 'MemUsed: 6514528 kB' 'SwapCached: 32 kB' 'Active: 3912048 kB' 'Inactive: 270204 kB' 'Active(anon): 3531092 kB' 'Inactive(anon): 60 kB' 'Active(file): 380956 kB' 'Inactive(file): 270144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3753324 kB' 'Mapped: 130424 kB' 'AnonPages: 432068 kB' 'Shmem: 3102192 kB' 'KernelStack: 12344 kB' 'PageTables: 6224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254632 kB' 'Slab: 668700 kB' 'SReclaimable: 254632 kB' 'SUnreclaim: 414068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace repeats the [[ <key> == HugePages_Surp ]] / continue pair for every node0 meminfo key until HugePages_Surp matches]
00:03:54.213 17:13:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.213 17:13:13 -- setup/common.sh@33 -- # echo 0
00:03:54.213 17:13:13 -- setup/common.sh@33 -- # return 0
00:03:54.213 17:13:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
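The hugepages.sh@115-117 loop above is the per-node half of the verification: each node's expected count is bumped by the reserved pages, then that node's surplus pages are read back and folded in. A short sketch of the same accounting, reusing the hypothetical get_meminfo helper sketched earlier and the values this run reported:

    resv=0                                    # resv_hugepages echoed above
    declare -a nodes_test=([0]=512 [1]=513)   # per-node counts under test
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes in this run
        (( nodes_test[node] += surp ))
    done
    echo "${nodes_test[@]}"                   # -> 512 513

This is also where the earlier global invariant comes from: (( 1025 == nr_hugepages + surp + resv )) holds because 512 + 513 pages with zero surplus and zero reserved sum to the 1025 pages odd_alloc requested.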
00:03:54.213 17:13:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:54.213 17:13:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:54.474 17:13:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:54.474 17:13:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.474 17:13:13 -- setup/common.sh@18 -- # local node=1
00:03:54.474 17:13:13 -- setup/common.sh@19 -- # local var val
00:03:54.474 17:13:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.474 17:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.474 17:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:54.474 17:13:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:54.474 17:13:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.474 17:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.474 17:13:13 -- setup/common.sh@31 -- # IFS=': '
00:03:54.474 17:13:13 -- setup/common.sh@31 -- # read -r var val _
00:03:54.474 17:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698400 kB' 'MemFree: 17945488 kB' 'MemUsed: 9752912 kB' 'SwapCached: 0 kB' 'Active: 4969356 kB' 'Inactive: 1263208 kB' 'Active(anon): 4841844 kB' 'Inactive(anon): 17672 kB' 'Active(file): 127512 kB' 'Inactive(file): 1245536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6141692 kB' 'Mapped: 53632 kB' 'AnonPages: 91064 kB' 'Shmem: 4768644 kB' 'KernelStack: 9640 kB' 'PageTables: 2380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 229788 kB' 'Slab: 730096 kB' 'SReclaimable: 229788 kB' 'SUnreclaim: 500308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace repeats the [[ <key> == HugePages_Surp ]] / continue pair for every node1 meminfo key until HugePages_Surp matches]
00:03:54.475 17:13:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.475 17:13:13 -- setup/common.sh@33 -- # echo 0
00:03:54.475 17:13:13 -- setup/common.sh@33 -- # return 0
00:03:54.475 17:13:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
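The pass/fail comparison that follows must accept either distribution of the odd extra page, so the script does not compare node-for-node. Instead it uses each count as an index into plain indexed arrays (sorted_t for observed values, sorted_s for requested ones); indexed-array keys always expand in ascending order, so both sides collapse to the same "512 513" string whenever the two value sets match. A sketch with this run's values (a reconstruction of the pattern; the per-node ordering of nodes_sys is illustrative):

    declare -a nodes_test=([0]=512 [1]=513)   # counts read back per node
    declare -a nodes_sys=([0]=513 [1]=512)    # counts the test asked for (illustrative order)
    declare -a sorted_t=() sorted_s=()        # indexed arrays used as sorted value sets
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1          # subscript is evaluated arithmetically
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # "${!sorted_t[*]}" expands the keys in ascending order: "512 513"
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd_alloc split OK"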
00:03:54.475 17:13:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.475 17:13:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.475 17:13:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.475 17:13:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:54.475 node0=512 expecting 513
00:03:54.475 17:13:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.475 17:13:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.475 17:13:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.475 17:13:14 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:54.475 node1=513 expecting 512
00:03:54.475 17:13:14 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:54.475 real 0m3.764s
00:03:54.475 user 0m1.439s
00:03:54.475 sys 0m2.398s
00:03:54.475 17:13:14 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:54.475 17:13:14 -- common/autotest_common.sh@10 -- # set +x
00:03:54.475 ************************************
00:03:54.475 END TEST odd_alloc
00:03:54.475 ************************************
00:03:54.475 17:13:14 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:54.475 17:13:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:54.475 17:13:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:54.475 17:13:14 -- common/autotest_common.sh@10 -- # set +x
00:03:54.475 ************************************
00:03:54.475 START TEST custom_alloc
00:03:54.475 ************************************
00:03:54.475 17:13:14 -- common/autotest_common.sh@1114 -- # custom_alloc
00:03:54.475 17:13:14 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:54.475 17:13:14 -- setup/hugepages.sh@169 -- # local node
00:03:54.475 17:13:14 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:54.475 17:13:14 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:54.475 17:13:14 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:54.475 17:13:14 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:54.475 17:13:14 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:54.475 17:13:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:54.475 17:13:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:54.475 17:13:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:54.475 17:13:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[xtrace: with no user nodes given, get_test_nr_hugepages_per_node splits the 512 pages evenly across the 2 nodes -- nodes_test[0]=nodes_test[1]=256]
00:03:54.475 17:13:14 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:54.475 17:13:14 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:54.475 17:13:14 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:54.475 17:13:14 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:54.475 17:13:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:54.475 17:13:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:54.475 17:13:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:54.475 17:13:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[xtrace: nodes_hp[0] is already populated, so this pass copies it into nodes_test (nodes_test[0]=512) and returns 0]
00:03:54.475 17:13:14 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:54.475 17:13:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:54.475 17:13:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:54.475 17:13:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:54.475 17:13:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:54.475 17:13:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:54.475 17:13:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:54.476 17:13:14 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[xtrace: the final pass copies both entries into nodes_test (nodes_test[0]=512, nodes_test[1]=1024) and returns 0]
00:03:54.476 17:13:14 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:54.476 17:13:14 -- setup/hugepages.sh@187 -- # setup output
00:03:54.476 17:13:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.476 17:13:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
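The HUGENODE string handed to setup.sh above is built with a small IFS trick: the per-node requests are collected into an array, and "${HUGENODE[*]}" is expanded while IFS is a comma, which joins the elements with commas. A sketch of just that construction (variable names follow the trace; this is a reconstruction, not the SPDK source):

    declare -a nodes_hp=([0]=512 [1]=1024)   # per-node 2 MiB hugepage requests
    declare -a HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    # custom_alloc declares `local IFS=,`, so the [*] expansion joins on commas:
    (IFS=,; echo "HUGENODE=${HUGENODE[*]}")  # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
    echo "total pages requested: $_nr_hugepages"   # -> 1536, matching nr_hugepages below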
00:03:57.916 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:57.916 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
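Editor's note: these "Already using the vfio-pci driver" lines mean setup.sh found every device it manages (the 8086:2021 IOAT channels and the 8086:0a54 NVMe controller) already bound to vfio-pci, so no rebinding was needed. A hedged illustration of how such a report can be produced by hand, using only the sysfs layout (the actual setup.sh logic is considerably more involved):

    # Print the bound driver for every PCI function, as in the listing above.
    for dev in /sys/bus/pci/devices/*; do
        [[ -L $dev/driver ]] || continue
        echo "${dev##*/} ($(cat "$dev/vendor") $(cat "$dev/device")): $(basename "$(readlink "$dev/driver")")"
    done

(sysfs reports vendor/device IDs with a 0x prefix; the log strips it.)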
00:03:57.916 17:13:17 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:57.916 17:13:17 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:57.916 17:13:17 -- setup/hugepages.sh@89 -- # local node
00:03:57.916 17:13:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:57.916 17:13:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:57.916 17:13:17 -- setup/hugepages.sh@92 -- # local surp
00:03:57.916 17:13:17 -- setup/hugepages.sh@93 -- # local resv
00:03:57.916 17:13:17 -- setup/hugepages.sh@94 -- # local anon
00:03:57.916 17:13:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:57.916 17:13:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:57.916 17:13:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:57.916 17:13:17 -- setup/common.sh@18 -- # local node=
00:03:57.916 17:13:17 -- setup/common.sh@19 -- # local var val
00:03:57.916 17:13:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:57.916 17:13:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.916 17:13:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.916 17:13:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.916 17:13:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.916 17:13:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.916 17:13:17 -- setup/common.sh@31 -- # IFS=': '
00:03:57.916 17:13:17 -- setup/common.sh@31 -- # read -r var val _
00:03:57.916 17:13:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 43000132 kB' 'MemAvailable: 44836488 kB' 'Buffers: 3740 kB' 'Cached: 9891316 kB' 'SwapCached: 32 kB' 'Active: 8882256 kB' 'Inactive: 1533412 kB' 'Active(anon): 8373788 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523428 kB' 'Mapped: 184164 kB' 'Shmem: 7870908 kB' 'KReclaimable: 484420 kB' 'Slab: 1399360 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914940 kB' 'KernelStack: 21808 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957624 kB' 'Committed_AS: 9573324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[xtrace condensed: setup/common.sh@31-@32 re-reads the snapshot above field by field, hitting "continue" on every name that is not AnonHugePages]
00:03:57.917 17:13:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:57.917 17:13:17 -- setup/common.sh@33 -- # echo 0
00:03:57.917 17:13:17 -- setup/common.sh@33 -- # return 0
00:03:57.917 17:13:17 -- setup/hugepages.sh@97 -- # anon=0
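Editor's note: the loop traced (and condensed) above is setup/common.sh's get_meminfo. It snapshots the chosen meminfo file into an array, strips the "Node <N>" prefix that per-node copies carry, then reads "field: value" pairs until the requested field matches and echoes its value. A condensed, self-contained re-implementation of that flow, an approximation of the traced code rather than a verbatim copy:

    get_meminfo() {  # usage: get_meminfo <Field> [node]
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node copy of meminfo.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#"Node $node "}     # per-node lines start with "Node <N> "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done <"$mem_f"
        return 1
    }
    get_meminfo AnonHugePages     # 0 on this box
    get_meminfo HugePages_Total   # 1536 after the setup above

Note how the trace shows the same file selection: with no node argument the test "[[ -e /sys/devices/system/node/node/meminfo ]]" fails and /proc/meminfo is used.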
00:03:57.917 17:13:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:57.917 17:13:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.917 17:13:17 -- setup/common.sh@18 -- # local node=
00:03:57.917 17:13:17 -- setup/common.sh@19 -- # local var val
00:03:57.917 17:13:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:57.917 17:13:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.917 17:13:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.917 17:13:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.917 17:13:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.917 17:13:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.917 17:13:17 -- setup/common.sh@31 -- # IFS=': '
00:03:57.917 17:13:17 -- setup/common.sh@31 -- # read -r var val _
00:03:57.917 17:13:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 42997832 kB' 'MemAvailable: 44834188 kB' 'Buffers: 3740 kB' 'Cached: 9891320 kB' 'SwapCached: 32 kB' 'Active: 8882756 kB' 'Inactive: 1533412 kB' 'Active(anon): 8374288 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523964 kB' 'Mapped: 184648 kB' 'Shmem: 7870912 kB' 'KReclaimable: 484420 kB' 'Slab: 1399360 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914940 kB' 'KernelStack: 21776 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957624 kB' 'Committed_AS: 9587640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217988 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[xtrace condensed: the same setup/common.sh@31-@32 field scan, this time continuing past every name that is not HugePages_Surp]
00:03:57.919 17:13:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.919 17:13:17 -- setup/common.sh@33 -- # echo 0
00:03:57.919 17:13:17 -- setup/common.sh@33 -- # return 0
00:03:57.919 17:13:17 -- setup/hugepages.sh@99 -- # surp=0
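Editor's note: both probes so far returned 0, which the strict accounting below relies on. HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in; HugePages_Surp counts pages allocated beyond nr_hugepages under the vm.nr_overcommit_hugepages limit. A quick manual check of the same counters, plain /proc reads with nothing autotest-specific about them:

    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    cat /proc/sys/vm/nr_overcommit_hugepages   # 0 disables surplus pages entirely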
00:03:57.919 17:13:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:57.919 17:13:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:57.919 17:13:17 -- setup/common.sh@18 -- # local node=
00:03:57.919 17:13:17 -- setup/common.sh@19 -- # local var val
00:03:57.919 17:13:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:57.919 17:13:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.919 17:13:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.919 17:13:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.919 17:13:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.919 17:13:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.919 17:13:17 -- setup/common.sh@31 -- # IFS=': '
00:03:57.919 17:13:17 -- setup/common.sh@31 -- # read -r var val _
00:03:57.919 17:13:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 42991928 kB' 'MemAvailable: 44828284 kB' 'Buffers: 3740 kB' 'Cached: 9891332 kB' 'SwapCached: 32 kB' 'Active: 8886848 kB' 'Inactive: 1533412 kB' 'Active(anon): 8378380 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528532 kB' 'Mapped: 184828 kB' 'Shmem: 7870924 kB' 'KReclaimable: 484420 kB' 'Slab: 1399296 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914876 kB' 'KernelStack: 21744 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957624 kB' 'Committed_AS: 9579108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217928 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[xtrace condensed: the setup/common.sh@31-@32 field scan again, continuing until it reaches HugePages_Rsvd]
00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:57.920 17:13:17 -- setup/common.sh@33 -- # echo 0
00:03:57.920 17:13:17 -- setup/common.sh@33 -- # return 0
00:03:57.920 17:13:17 -- setup/hugepages.sh@100 -- # resv=0
00:03:57.920 17:13:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:57.920 nr_hugepages=1536
00:03:57.920 17:13:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:57.920 resv_hugepages=0
00:03:57.920 17:13:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:57.920 surplus_hugepages=0
00:03:57.920 17:13:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:57.920 anon_hugepages=0
00:03:57.920 17:13:17 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:57.920 17:13:17 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
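Editor's note: with anon, surp and resv all probed, the verify step reduces to arithmetic: the kernel's HugePages_Total must equal the 1536 pages the test requested plus surplus plus reserved. Stated as a tiny standalone check, reusing the get_meminfo sketch from earlier and this run's expected value:

    expected=1536
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)
    ((total == expected + surp + resv)) \
        || echo "hugepage accounting mismatch: $total != $expected + $surp + $resv"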
read -r var val _ 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.920 17:13:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.920 17:13:17 -- setup/common.sh@33 -- # echo 0 00:03:57.920 17:13:17 -- setup/common.sh@33 -- # return 0 00:03:57.920 17:13:17 -- setup/hugepages.sh@100 -- # resv=0 00:03:57.920 17:13:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:57.920 nr_hugepages=1536 00:03:57.920 17:13:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.920 resv_hugepages=0 00:03:57.920 17:13:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.920 surplus_hugepages=0 00:03:57.920 17:13:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.920 anon_hugepages=0 00:03:57.920 17:13:17 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:57.920 17:13:17 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:57.920 17:13:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.920 17:13:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.920 17:13:17 -- setup/common.sh@18 -- # local node= 00:03:57.920 17:13:17 -- setup/common.sh@19 -- # local var val 00:03:57.920 17:13:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.920 17:13:17 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:57.920 17:13:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.920 17:13:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.920 17:13:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.920 17:13:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.920 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.921 17:13:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 42998032 kB' 'MemAvailable: 44834388 kB' 'Buffers: 3740 kB' 'Cached: 9891352 kB' 'SwapCached: 32 kB' 'Active: 8881832 kB' 'Inactive: 1533412 kB' 'Active(anon): 8373364 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523472 kB' 'Mapped: 184568 kB' 'Shmem: 7870944 kB' 'KReclaimable: 484420 kB' 'Slab: 1399296 kB' 'SReclaimable: 484420 kB' 'SUnreclaim: 914876 kB' 'KernelStack: 21728 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957624 kB' 'Committed_AS: 9574888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217924 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB' 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # continue 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.921 17:13:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.921 17:13:17 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.921 17:13:17 -- setup/common.sh@32 -- # continue
00:03:57.921 17:13:17 -- setup/common.sh@31 -- # IFS=': '
00:03:57.921 17:13:17 -- setup/common.sh@31 -- # read -r var val _
00:03:57.921 17:13:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.921 17:13:17 -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue cycle repeats for each remaining non-matching field, Active(anon) through Unaccepted ...]
00:03:57.922 17:13:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.922 17:13:17 -- setup/common.sh@33 -- # echo 1536
00:03:57.922 17:13:17 -- setup/common.sh@33 -- # return 0
00:03:57.922 17:13:17 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:57.922 17:13:17 -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.922 17:13:17 -- setup/hugepages.sh@27 -- # local node
00:03:57.922 17:13:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.922 17:13:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:57.922 17:13:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.922 17:13:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:57.922 17:13:17 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:57.922 17:13:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:57.922 17:13:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.922 17:13:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.922 17:13:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:57.922 17:13:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.922 17:13:17 -- setup/common.sh@18 -- # local node=0
00:03:57.922 17:13:17 -- setup/common.sh@19 -- # local var val
00:03:57.922 17:13:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:57.922 17:13:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.922 17:13:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:57.922 17:13:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:57.922 17:13:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.922 17:13:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.922 17:13:17 -- setup/common.sh@31 -- # IFS=': '
00:03:57.922 17:13:17 -- setup/common.sh@31 -- # read -r var val _
00:03:57.922 17:13:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26070268 kB' 'MemUsed: 6515100 kB' 'SwapCached: 32 kB' 'Active: 3915048 kB' 'Inactive: 270204 kB' 'Active(anon): 3534092 kB' 'Inactive(anon): 60 kB' 'Active(file): 380956 kB' 'Inactive(file): 270144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3753376 kB' 'Mapped: 130936 kB' 'AnonPages: 435108 kB' 'Shmem: 3102244 kB' 'KernelStack: 12024 kB' 'PageTables: 5296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254632 kB' 'Slab: 669048 kB' 'SReclaimable: 254632 kB' 'SUnreclaim: 414416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
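The scan traced above is the heart of setup/common.sh's get_meminfo helper: the requested key is matched field by field against the meminfo stream, and the first hit echoes its value (1536 hugepages system-wide here, i.e. 512 on node0 plus 1024 on node1) and returns. A minimal stand-alone sketch of that pattern, hypothetical and reduced from the traced helper (the real one also takes an optional node id, exercised just below):

  # get_meminfo KEY -- print the value of KEY from /proc/meminfo
  # (sketch of the loop traced above, not the full SPDK helper)
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # with IFS=': ', "HugePages_Total:  1536" splits into
          # var=HugePages_Total, val=1536 (a trailing 'kB' unit lands in _)
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo HugePages_Total   # prints 1536 on the machine traced above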
00:03:57.922 17:13:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.922 17:13:17 -- setup/common.sh@32 -- # continue
[... the same continue / IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] cycle repeats for each node0 meminfo field from MemFree through HugePages_Free ...]
00:03:57.923 17:13:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.923 17:13:17 -- setup/common.sh@33 -- # echo 0
00:03:57.923 17:13:17 -- setup/common.sh@33 -- # return 0
00:03:57.923 17:13:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.923 17:13:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.923 17:13:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.923 17:13:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:57.923 17:13:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.923 17:13:17 -- setup/common.sh@18 -- # local node=1
00:03:57.923 17:13:17 -- setup/common.sh@19 -- # local var val
00:03:57.923 17:13:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:57.923 17:13:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.923 17:13:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:57.923 17:13:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:57.923 17:13:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.923 17:13:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.923 17:13:17 -- setup/common.sh@31 -- # IFS=': '
00:03:57.923 17:13:17 -- setup/common.sh@31 -- # read -r var val _
00:03:57.923 17:13:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698400 kB' 'MemFree: 16925284 kB' 'MemUsed: 10773116 kB' 'SwapCached: 0 kB' 'Active: 4969596 kB' 'Inactive: 1263208 kB' 'Active(anon): 4842084 kB' 'Inactive(anon): 17672 kB' 'Active(file): 127512 kB' 'Inactive(file): 1245536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6141752 kB' 'Mapped: 53632 kB' 'AnonPages: 91184 kB' 'Shmem: 4768704 kB' 'KernelStack: 9688 kB' 'PageTables: 2520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 229788 kB' 'Slab: 730248 kB' 'SReclaimable: 229788 kB' 'SUnreclaim: 500460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
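For the per-node calls (get_meminfo HugePages_Surp 0 and 1 in this trace), common.sh@23-29 swaps the source file to /sys/devices/system/node/nodeN/meminfo and strips the "Node N " prefix so the same key parser applies. A sketch of just that branch, assuming bash with extglob enabled (which the +([0-9]) pattern seen in the trace requires):

  shopt -s extglob                  # +([0-9]) below is an extglob pattern
  node=1
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # "Node 1 HugePages_Total:  1024" -> "HugePages_Total:  1024"
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"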
00:03:57.923 17:13:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.923 17:13:17 -- setup/common.sh@32 -- # continue
[... the same continue / IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] cycle repeats for each node1 meminfo field from MemFree through HugePages_Free ...]
00:03:57.924 17:13:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.924 17:13:17 -- setup/common.sh@33 -- # echo 0
00:03:57.924 17:13:17 -- setup/common.sh@33 -- # return 0
00:03:57.924 17:13:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.924 17:13:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.924 17:13:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.924 17:13:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.924 17:13:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:57.924 node0=512 expecting 512
00:03:57.924 17:13:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.924 17:13:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.924 17:13:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.924 17:13:17 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:57.924 node1=1024 expecting 1024
00:03:57.924 17:13:17 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:57.924
00:03:57.924 real 0m3.581s
00:03:57.924 user 0m1.333s
00:03:57.924 sys 0m2.262s
00:03:57.924 17:13:17 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:57.924 17:13:17 -- common/autotest_common.sh@10 -- # set +x
00:03:57.924 ************************************
00:03:57.924 END TEST custom_alloc
00:03:57.924 ************************************
00:03:57.924 17:13:17 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:57.924 17:13:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:57.924 17:13:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:57.924 17:13:17 -- common/autotest_common.sh@10 -- # set +x
00:03:57.924 ************************************
00:03:57.924 START TEST no_shrink_alloc
************************************
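The custom_alloc pass that just ended hinges on hugepages.sh@126-130: each node's test count (HugePages_Total with surplus factored out) is echoed against its expectation, and the joined list is compared to the expected "512,1024" split. A hypothetical one-shot equivalent of that check (awk and paste here are illustrative, not what the script uses):

  expected=512,1024
  actual=$(for n in /sys/devices/system/node/node[0-9]*; do
               # node meminfo lines look like "Node 0 HugePages_Total:   512"
               awk '/HugePages_Total/ {print $NF}' "$n/meminfo"
           done | paste -sd,)           # "512\n1024" -> "512,1024"
  [[ $actual == "$expected" ]] && echo "per-node hugepage split OK: $actual"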
00:03:57.925 17:13:17 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:58.184 17:13:17 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:58.184 17:13:17 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:58.184 17:13:17 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:58.184 17:13:17 -- setup/hugepages.sh@51 -- # shift
00:03:58.184 17:13:17 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:58.184 17:13:17 -- setup/hugepages.sh@52 -- # local node_ids
00:03:58.184 17:13:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:58.184 17:13:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:58.184 17:13:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:58.184 17:13:17 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:58.184 17:13:17 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:58.184 17:13:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:58.184 17:13:17 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:58.184 17:13:17 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:58.184 17:13:17 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:58.184 17:13:17 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:58.184 17:13:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:58.184 17:13:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:58.184 17:13:17 -- setup/hugepages.sh@73 -- # return 0
00:03:58.184 17:13:17 -- setup/hugepages.sh@198 -- # setup output
00:03:58.184 17:13:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:58.184 17:13:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:01.479 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:01.479 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:01.479 17:13:21 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:01.479 17:13:21 -- setup/hugepages.sh@89 -- # local node
00:04:01.479 17:13:21 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:01.479 17:13:21 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:01.479 17:13:21 -- setup/hugepages.sh@92 -- # local surp
00:04:01.479 17:13:21 -- setup/hugepages.sh@93 -- # local resv
00:04:01.479 17:13:21 -- setup/hugepages.sh@94 -- # local anon
00:04:01.479 17:13:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
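get_test_nr_hugepages 2097152 0 in the trace resolves to nr_hugepages=1024 pinned to node 0. The arithmetic, assuming the size argument is in kB (which the 2048 kB Hugepagesize and the resulting 1024 pages in the dumps below imply):

  size=2097152                      # requested size, assumed kB
  default_hugepages=2048            # 'Hugepagesize: 2048 kB' in the meminfo dump below
  nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
  declare -a nodes_test
  nodes_test[0]=$nr_hugepages       # single user-supplied node id: 0
  echo "nr_hugepages=$nr_hugepages on node 0"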
00:04:01.479 17:13:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:01.479 17:13:21 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:01.479 17:13:21 -- setup/common.sh@18 -- # local node=
00:04:01.479 17:13:21 -- setup/common.sh@19 -- # local var val
00:04:01.479 17:13:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.479 17:13:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.479 17:13:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.479 17:13:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.479 17:13:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.479 17:13:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.479 17:13:21 -- setup/common.sh@31 -- # IFS=': '
00:04:01.479 17:13:21 -- setup/common.sh@31 -- # read -r var val _
00:04:01.479 17:13:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44014936 kB' 'MemAvailable: 45851260 kB' 'Buffers: 3740 kB' 'Cached: 9891456 kB' 'SwapCached: 32 kB' 'Active: 8881096 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372628 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522812 kB' 'Mapped: 184176 kB' 'Shmem: 7871048 kB' 'KReclaimable: 484388 kB' 'Slab: 1399500 kB' 'SReclaimable: 484388 kB' 'SUnreclaim: 915112 kB' 'KernelStack: 21792 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9573752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217940 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
00:04:01.479 17:13:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.479 17:13:21 -- setup/common.sh@32 -- # continue
[... the same continue / IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] cycle repeats for each /proc/meminfo field from MemFree through HardwareCorrupted ...]
00:04:01.480 17:13:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.480 17:13:21 -- setup/common.sh@33 -- # echo 0
00:04:01.480 17:13:21 -- setup/common.sh@33 -- # return 0
00:04:01.480 17:13:21 -- setup/hugepages.sh@97 -- # anon=0
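verify_nr_hugepages (hugepages.sh@97-100) first collects the quantities that could skew the HugePages_Total comparison: transparent anon hugepages, surplus pages, and reserved pages, the last of which is queried next in the log. The sequence, sketched with the illustrative get_meminfo helper from earlier:

  anon=$(get_meminfo AnonHugePages)    # 0 kB in the trace above
  surp=$(get_meminfo HugePages_Surp)   # 0, queried immediately below
  resv=$(get_meminfo HugePages_Rsvd)   # queried right after that
  echo "anon=$anon surp=$surp resv=$resv"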
mem 00:04:01.480 17:13:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.480 17:13:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44016060 kB' 'MemAvailable: 45852384 kB' 'Buffers: 3740 kB' 'Cached: 9891460 kB' 'SwapCached: 32 kB' 'Active: 8881212 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372744 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522876 kB' 'Mapped: 184144 kB' 'Shmem: 7871052 kB' 'KReclaimable: 484388 kB' 'Slab: 1399484 kB' 'SReclaimable: 484388 kB' 'SUnreclaim: 915096 kB' 'KernelStack: 21776 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9574260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217956 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB' 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # continue 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # continue 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # continue 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # continue 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # continue 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.480 17:13:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.480 17:13:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.481 17:13:21 -- setup/common.sh@32 -- # continue 00:04:01.481 17:13:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.481 17:13:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.481 17:13:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.481 17:13:21 -- setup/common.sh@32 -- # continue 00:04:01.481 17:13:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.481 17:13:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.481 17:13:21 -- setup/common.sh@32 -- # [[ 
00:04:01.481 17:13:21 -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [xtrace condensed: one compare-and-continue pair per remaining /proc/meminfo key, Inactive through HugePages_Rsvd]
00:04:01.482 17:13:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.482 17:13:21 -- setup/common.sh@33 -- # echo 0
00:04:01.482 17:13:21 -- setup/common.sh@33 -- # return 0
00:04:01.482 17:13:21 -- setup/hugepages.sh@99 -- # surp=0
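Every memory query in this stage goes through the same helper, so the pattern above repeats verbatim for each key. As a reading aid, here is a hedged bash reconstruction of setup/common.sh's get_meminfo inferred from the trace (illustrative, not the verbatim SPDK source): it picks /proc/meminfo or a per-node meminfo, strips the per-node "Node <n> " prefix, splits each line on ': ', and prints the value of the requested key.

    shopt -s extglob   # needed for the "Node <n> " prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # With a node argument, read that NUMA node's meminfo instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "Key: value kB" into key and numeric value.
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

In this run, get_meminfo HugePages_Surp prints 0 from /proc/meminfo, while get_meminfo HugePages_Surp 0 (further below) reads node0's own meminfo file instead.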
00:04:01.482 17:13:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.482 17:13:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.482 17:13:21 -- setup/common.sh@18 -- # local node=
00:04:01.482 17:13:21 -- setup/common.sh@19 -- # local var val
00:04:01.482 17:13:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.482 17:13:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.482 17:13:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.482 17:13:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.482 17:13:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.482 17:13:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.482 17:13:21 -- setup/common.sh@31 -- # IFS=': '
00:04:01.482 17:13:21 -- setup/common.sh@31 -- # read -r var val _
00:04:01.482 17:13:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44016876 kB' 'MemAvailable: 45853200 kB' 'Buffers: 3740 kB' 'Cached: 9891460 kB' 'SwapCached: 32 kB' 'Active: 8880964 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372496 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522676 kB' 'Mapped: 184144 kB' 'Shmem: 7871052 kB' 'KReclaimable: 484388 kB' 'Slab: 1399476 kB' 'SReclaimable: 484388 kB' 'SUnreclaim: 915088 kB' 'KernelStack: 21776 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9574272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217924 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
00:04:01.482 17:13:21 -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue  [xtrace condensed: one compare-and-continue pair per /proc/meminfo key, MemTotal through HugePages_Free]
00:04:01.483 17:13:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.483 17:13:21 -- setup/common.sh@33 -- # echo 0
00:04:01.483 17:13:21 -- setup/common.sh@33 -- # return 0
00:04:01.483 17:13:21 -- setup/hugepages.sh@100 -- # resv=0
00:04:01.483 17:13:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:01.483 nr_hugepages=1024
00:04:01.483 17:13:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.483 resv_hugepages=0
00:04:01.483 17:13:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.483 surplus_hugepages=0
00:04:01.483 17:13:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.483 anon_hugepages=0
00:04:01.483 17:13:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.483 17:13:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:01.483 17:13:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.483 17:13:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.483 17:13:21 -- setup/common.sh@18 -- # local node=
00:04:01.483 17:13:21 -- setup/common.sh@19 -- # local var val
00:04:01.483 17:13:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.483 17:13:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.483 17:13:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.483 17:13:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.483 17:13:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.483 17:13:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.483 17:13:21 -- setup/common.sh@31 -- # IFS=': '
00:04:01.483 17:13:21 -- setup/common.sh@31 -- # read -r var val _
00:04:01.483 17:13:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44016896 kB' 'MemAvailable: 45853220 kB' 'Buffers: 3740 kB' 'Cached: 9891500 kB' 'SwapCached: 32 kB' 'Active: 8880712 kB' 'Inactive: 1533412 kB' 'Active(anon): 8372244 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522292 kB' 'Mapped: 184068 kB' 'Shmem: 7871092 kB' 'KReclaimable: 484388 kB' 'Slab: 1399464 kB' 'SReclaimable: 484388 kB' 'SUnreclaim: 915076 kB' 'KernelStack: 21760 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9574288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217924 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
00:04:01.484 17:13:21 -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue  [xtrace condensed: one compare-and-continue pair per /proc/meminfo key, MemTotal through Unaccepted]
00:04:01.485 17:13:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.485 17:13:21 -- setup/common.sh@33 -- # echo 1024
00:04:01.485 17:13:21 -- setup/common.sh@33 -- # return 0
00:04:01.485 17:13:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
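Pulled together, the arithmetic just traced is the core of verify_nr_hugepages. A sketch on top of the get_meminfo reconstruction above (variable names mirror setup/hugepages.sh; the 1024 target is this run's requested pool size):

    nr_hugepages=1024                      # requested pool size
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    # The pool is consistent when the kernel-reported total accounts for
    # the requested pages plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2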
00:04:01.485 17:13:21 -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.485 17:13:21 -- setup/hugepages.sh@27 -- # local node
00:04:01.485 17:13:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.485 17:13:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:01.485 17:13:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.485 17:13:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:01.485 17:13:21 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.485 17:13:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.485 17:13:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.485 17:13:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.485 17:13:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.485 17:13:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.485 17:13:21 -- setup/common.sh@18 -- # local node=0
00:04:01.485 17:13:21 -- setup/common.sh@19 -- # local var val
00:04:01.485 17:13:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.485 17:13:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.485 17:13:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.485 17:13:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.485 17:13:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.485 17:13:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.485 17:13:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25017124 kB' 'MemUsed: 7568244 kB' 'SwapCached: 32 kB' 'Active: 3911484 kB' 'Inactive: 270204 kB' 'Active(anon): 3530528 kB' 'Inactive(anon): 60 kB' 'Active(file): 380956 kB' 'Inactive(file): 270144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3753424 kB' 'Mapped: 130436 kB' 'AnonPages: 431584 kB' 'Shmem: 3102292 kB' 'KernelStack: 12072 kB' 'PageTables: 5504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254632 kB' 'Slab: 669148 kB' 'SReclaimable: 254632 kB' 'SUnreclaim: 414516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:01.485 17:13:21 -- setup/common.sh@31 -- # IFS=': '
00:04:01.485 17:13:21 -- setup/common.sh@31 -- # read -r var val _
00:04:01.485 17:13:21 -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [xtrace condensed: one compare-and-continue pair per node0 meminfo key, MemTotal through HugePages_Free]
00:04:01.486 17:13:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.486 17:13:21 -- setup/common.sh@33 -- # echo 0
00:04:01.486 17:13:21 -- setup/common.sh@33 -- # return 0
00:04:01.486 17:13:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.486 17:13:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.486 17:13:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.745 17:13:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.745 17:13:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:01.745 node0=1024 expecting 1024
00:04:01.745 17:13:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
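The per-node pass works the same way, node by node. A sketch of how get_nodes-style enumeration plus per-node get_meminfo yields the numbers above (nodes_sys here is a stand-in for the script's arrays; how setup/hugepages.sh itself fills nodes_sys is not visible in this trace, so reading HugePages_Total per node is an assumption):

    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} is the node index; read that node's own meminfo.
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this machine
    echo "node0=${nodes_sys[0]}"       # 1024 -- the whole pool sits on node0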
00:04:01.745 17:13:21 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:01.745 17:13:21 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:01.745 17:13:21 -- setup/hugepages.sh@202 -- # setup output
00:04:01.745 17:13:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.745 17:13:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:05.042 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:05.042 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:05.042 INFO: Requested 512 hugepages but 1024 already allocated on node0
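The INFO line is the expected outcome of the two variables set before the rerun: scripts/setup.sh reads NRHUGE and CLEAR_HUGE from the environment, and with CLEAR_HUGE=no it keeps the existing pool, so the 1024 pages already on node0 satisfy the 512-page request without reallocation. Equivalent invocation (path shortened from this job's workspace):

    CLEAR_HUGE=no NRHUGE=512 ./spdk/scripts/setup.sh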
00:04:05.042 17:13:24 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:05.042 17:13:24 -- setup/hugepages.sh@89 -- # local node
00:04:05.042 17:13:24 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:05.042 17:13:24 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:05.042 17:13:24 -- setup/hugepages.sh@92 -- # local surp
00:04:05.042 17:13:24 -- setup/hugepages.sh@93 -- # local resv
00:04:05.042 17:13:24 -- setup/hugepages.sh@94 -- # local anon
00:04:05.042 17:13:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:05.042 17:13:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:05.042 17:13:24 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:05.042 17:13:24 -- setup/common.sh@18 -- # local node=
00:04:05.042 17:13:24 -- setup/common.sh@19 -- # local var val
00:04:05.042 17:13:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.042 17:13:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.042 17:13:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.042 17:13:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.042 17:13:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.042 17:13:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.042 17:13:24 -- setup/common.sh@31 -- # IFS=': '
00:04:05.042 17:13:24 -- setup/common.sh@31 -- # read -r var val _
00:04:05.042 17:13:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44010028 kB' 'MemAvailable: 45846352 kB' 'Buffers: 3740 kB' 'Cached: 9891580 kB' 'SwapCached: 32 kB' 'Active: 8882844 kB' 'Inactive: 1533412 kB' 'Active(anon): 8374376 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524280 kB' 'Mapped: 184160 kB' 'Shmem: 7871172 kB' 'KReclaimable: 484388 kB' 'Slab: 1399080 kB' 'SReclaimable: 484388 kB' 'SUnreclaim: 914692 kB' 'KernelStack: 21920 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9579260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218164 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
00:04:05.042 17:13:24 -- setup/common.sh@32 -- # [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue  [xtrace condensed: compare-and-continue over /proc/meminfo keys MemTotal through SUnreclaim; trace continues]
17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.043 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.043 17:13:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.043 17:13:24 -- setup/common.sh@33 -- # echo 0 00:04:05.043 17:13:24 -- setup/common.sh@33 -- # return 0 00:04:05.043 17:13:24 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.043 17:13:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.043 17:13:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.043 17:13:24 -- setup/common.sh@18 -- # local node= 00:04:05.043 17:13:24 -- setup/common.sh@19 -- # local var val 00:04:05.043 17:13:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.043 17:13:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.043 17:13:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.043 17:13:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.044 17:13:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.044 17:13:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.044 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.044 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.044 17:13:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44010668 kB' 'MemAvailable: 45846992 kB' 'Buffers: 3740 kB' 'Cached: 9891584 kB' 'SwapCached: 32 kB' 'Active: 8882844 kB' 'Inactive: 1533412 kB' 'Active(anon): 8374376 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524260 kB' 'Mapped: 184148 kB' 'Shmem: 7871176 kB' 'KReclaimable: 484388 kB' 'Slab: 1399104 kB' 'SReclaimable: 484388 kB' 'SUnreclaim: 914716 kB' 'KernelStack: 22096 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9579452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218132 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB' 00:04:05.044 17:13:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.044 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.044 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.044 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.044 17:13:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.044 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.044 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.044 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.044 17:13:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.044 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.044 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.044 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 
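
The trace above is setup/common.sh's get_meminfo at work: it slurps the meminfo file into an array with mapfile, strips any per-node prefix, then splits each line on ': ' and linearly scans for the requested key. A minimal standalone sketch of the same parsing pattern, assuming bash 4+ for mapfile; get_mem_value is a hypothetical name, not the script's actual helper:

    #!/usr/bin/env bash
    # Sketch of the mapfile + IFS-split scan shown in the xtrace above.
    get_mem_value() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo           # one array element per meminfo line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # split "Key:   value kB" into key/value
            [[ $var == "$get" ]] || continue         # skip keys that do not match
            echo "$val"                              # print the numeric field only
            return 0
        done
        return 1
    }
    get_mem_value HugePages_Total    # prints 1024 on this run's node
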
00:04:05.043 17:13:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:05.043 17:13:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.043 17:13:24 -- setup/common.sh@18 -- # local node=
00:04:05.043 17:13:24 -- setup/common.sh@19 -- # local var val
00:04:05.043 17:13:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.043 17:13:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.043 17:13:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.043 17:13:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.044 17:13:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.044 17:13:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.044 17:13:24 -- setup/common.sh@31 -- # IFS=': '
00:04:05.044 17:13:24 -- setup/common.sh@31 -- # read -r var val _
00:04:05.044 17:13:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44010668 kB' 'MemAvailable: 45846992 kB' 'Buffers: 3740 kB' 'Cached: 9891584 kB' 'SwapCached: 32 kB' 'Active: 8882844 kB' 'Inactive: 1533412 kB' 'Active(anon): 8374376 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524260 kB' 'Mapped: 184148 kB' 'Shmem: 7871176 kB' 'KReclaimable: 484388 kB' 'Slab: 1399104 kB' 'SReclaimable: 484388 kB' 'SUnreclaim: 914716 kB' 'KernelStack: 22096 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9579452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218132 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[xtrace elided: setup/common.sh@32 compares each /proc/meminfo key against HugePages_Surp and skips it with "continue" until the matching key is reached]
00:04:05.045 17:13:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.045 17:13:24 -- setup/common.sh@33 -- # echo 0
00:04:05.045 17:13:24 -- setup/common.sh@33 -- # return 0
00:04:05.045 17:13:24 -- setup/hugepages.sh@99 -- # surp=0
00:04:05.045 17:13:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.045 17:13:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.045 17:13:24 -- setup/common.sh@18 -- # local node=
00:04:05.045 17:13:24 -- setup/common.sh@19 -- # local var val
00:04:05.045 17:13:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.045 17:13:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.045 17:13:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.045 17:13:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.045 17:13:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.045 17:13:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.045 17:13:24 -- setup/common.sh@31 -- # IFS=': '
00:04:05.045 17:13:24 -- setup/common.sh@31 -- # read -r var val _
00:04:05.045 17:13:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44010600 kB' 'MemAvailable: 45846924 kB' 'Buffers: 3740 kB' 'Cached: 9891596 kB' 'SwapCached: 32 kB' 'Active: 8883184 kB' 'Inactive: 1533412 kB' 'Active(anon): 8374716 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524524 kB' 'Mapped: 184072 kB' 'Shmem: 7871188 kB' 'KReclaimable: 484388 kB' 'Slab: 1399072 kB' 'SReclaimable: 484388 kB' 'SUnreclaim: 914684 kB' 'KernelStack: 22048 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9579216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218228 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[xtrace elided: setup/common.sh@32 compares each /proc/meminfo key against HugePages_Rsvd and skips it with "continue" until the matching key is reached]
00:04:05.046 17:13:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.046 17:13:24 -- setup/common.sh@33 -- # echo 0
00:04:05.046 17:13:24 -- setup/common.sh@33 -- # return 0
00:04:05.046 17:13:24 -- setup/hugepages.sh@100 -- # resv=0
00:04:05.046 17:13:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:05.046 nr_hugepages=1024
00:04:05.046 17:13:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.046 resv_hugepages=0
00:04:05.046 17:13:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.046 surplus_hugepages=0
00:04:05.046 17:13:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.046 anon_hugepages=0
00:04:05.046 17:13:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.046 17:13:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
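
With anon, surp and resv collected, verify_nr_hugepages checks the kernel's accounting identity: HugePages_Total must equal the configured nr_hugepages plus surplus and reserved pages (1024 == 1024 + 0 + 0 in this run). A hedged sketch of that arithmetic check, reading the total with awk instead of the script's get_meminfo; the literal values are the ones reported above:

    #!/usr/bin/env bash
    # Sketch of the consistency check traced at hugepages.sh@107/@109 above.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))"
    fi
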
00:04:05.046 17:13:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.046 17:13:24 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.046 17:13:24 -- setup/common.sh@18 -- # local node=
00:04:05.046 17:13:24 -- setup/common.sh@19 -- # local var val
00:04:05.046 17:13:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.046 17:13:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.046 17:13:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.046 17:13:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.046 17:13:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.046 17:13:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.046 17:13:24 -- setup/common.sh@31 -- # IFS=': '
00:04:05.046 17:13:24 -- setup/common.sh@31 -- # read -r var val _
00:04:05.047 17:13:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283768 kB' 'MemFree: 44011072 kB' 'MemAvailable: 45847396 kB' 'Buffers: 3740 kB' 'Cached: 9891612 kB' 'SwapCached: 32 kB' 'Active: 8883256 kB' 'Inactive: 1533412 kB' 'Active(anon): 8374788 kB' 'Inactive(anon): 17732 kB' 'Active(file): 508468 kB' 'Inactive(file): 1515680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8386044 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524572 kB' 'Mapped: 184072 kB' 'Shmem: 7871204 kB' 'KReclaimable: 484388 kB' 'Slab: 1399072 kB' 'SReclaimable: 484388 kB' 'SUnreclaim: 914684 kB' 'KernelStack: 22016 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481912 kB' 'Committed_AS: 9579480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218164 kB' 'VmallocChunk: 0 kB' 'Percpu: 99904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2542964 kB' 'DirectMap2M: 50620416 kB' 'DirectMap1G: 16777216 kB'
[xtrace elided: setup/common.sh@32 compares each /proc/meminfo key against HugePages_Total and skips it with "continue" until the matching key is reached]
00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.048 17:13:24 -- setup/common.sh@33 -- # echo 1024
00:04:05.048 17:13:24 -- setup/common.sh@33 -- # return 0
00:04:05.048 17:13:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.048 17:13:24 -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.048 17:13:24 -- setup/hugepages.sh@27 -- # local node
00:04:05.048 17:13:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.048 17:13:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:05.048 17:13:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.048 17:13:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:05.048 17:13:24 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:05.048 17:13:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
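
get_nodes walks /sys/devices/system/node/node+([0-9]) with extglob and records a per-node hugepage count in nodes_sys (1024 on node0, 0 on node1, hence no_nodes=2). A sketch of that enumeration; reading the per-node 2048 kB counter from sysfs is an assumption here, since the trace only shows the assigned values, not their source:

    #!/usr/bin/env bash
    # Sketch of NUMA-node enumeration as traced in get_nodes above.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node", leaving the index
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this system
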
setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.048 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.048 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 
-- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # continue 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.049 17:13:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.049 17:13:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.049 17:13:24 -- setup/common.sh@33 -- # echo 0 00:04:05.049 17:13:24 -- setup/common.sh@33 -- # return 0 00:04:05.049 17:13:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.049 17:13:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.049 17:13:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.049 17:13:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.049 17:13:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.049 node0=1024 expecting 1024 00:04:05.049 17:13:24 
-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.049 00:04:05.049 real 0m7.106s 00:04:05.049 user 0m2.639s 00:04:05.049 sys 0m4.567s 00:04:05.049 17:13:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.049 17:13:24 -- common/autotest_common.sh@10 -- # set +x 00:04:05.049 ************************************ 00:04:05.049 END TEST no_shrink_alloc 00:04:05.049 ************************************ 00:04:05.309 17:13:24 -- setup/hugepages.sh@217 -- # clear_hp 00:04:05.309 17:13:24 -- setup/hugepages.sh@37 -- # local node hp 00:04:05.309 17:13:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.309 17:13:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.309 17:13:24 -- setup/hugepages.sh@41 -- # echo 0 00:04:05.309 17:13:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.309 17:13:24 -- setup/hugepages.sh@41 -- # echo 0 00:04:05.309 17:13:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.309 17:13:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.309 17:13:24 -- setup/hugepages.sh@41 -- # echo 0 00:04:05.309 17:13:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.309 17:13:24 -- setup/hugepages.sh@41 -- # echo 0 00:04:05.309 17:13:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:05.309 17:13:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:05.309 00:04:05.309 real 0m27.378s 00:04:05.309 user 0m9.677s 00:04:05.309 sys 0m16.570s 00:04:05.309 17:13:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.309 17:13:24 -- common/autotest_common.sh@10 -- # set +x 00:04:05.309 ************************************ 00:04:05.309 END TEST hugepages 00:04:05.309 ************************************ 00:04:05.309 17:13:24 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:05.309 17:13:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.309 17:13:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.309 17:13:24 -- common/autotest_common.sh@10 -- # set +x 00:04:05.309 ************************************ 00:04:05.309 START TEST driver 00:04:05.309 ************************************ 00:04:05.309 17:13:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:05.309 * Looking for test storage... 
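
For readers decoding the long runs of [[ key == \H\u\g\e... ]] / continue above: get_meminfo scans /proc/meminfo (or a per-node copy under sysfs) field by field, splitting each line on ': ' and echoing the value once the requested key matches, which is where the "echo 1024" (HugePages_Total) and "echo 0" (HugePages_Surp) above come from. A minimal standalone sketch of that pattern, not the verbatim setup/common.sh helper:

  # Print the value of one meminfo field, optionally for a single NUMA node.
  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      if [[ -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node N "; strip that first.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  get_meminfo HugePages_Total      # system-wide, e.g. 1024
  get_meminfo HugePages_Surp 0     # per-node variant, e.g. 0
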
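The clear_hp trace that follows END TEST no_shrink_alloc writes a zero into every hugepage-size knob on every node, which is what the repeated "echo 0" lines above are. A sketch of that teardown, assuming the stock sysfs layout and root privileges:

  # Release all reserved hugepages on every NUMA node.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes    # flag the suite sets once the pools are drained
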
00:04:05.309 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:05.309 17:13:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:05.309 17:13:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:05.309 17:13:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:05.309 17:13:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:05.309 17:13:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:05.309 17:13:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:05.309 17:13:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:05.309 17:13:25 -- scripts/common.sh@335 -- # IFS=.-: 00:04:05.309 17:13:25 -- scripts/common.sh@335 -- # read -ra ver1 00:04:05.309 17:13:25 -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.309 17:13:25 -- scripts/common.sh@336 -- # read -ra ver2 00:04:05.309 17:13:25 -- scripts/common.sh@337 -- # local 'op=<' 00:04:05.309 17:13:25 -- scripts/common.sh@339 -- # ver1_l=2 00:04:05.309 17:13:25 -- scripts/common.sh@340 -- # ver2_l=1 00:04:05.309 17:13:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:05.309 17:13:25 -- scripts/common.sh@343 -- # case "$op" in 00:04:05.309 17:13:25 -- scripts/common.sh@344 -- # : 1 00:04:05.309 17:13:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:05.309 17:13:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.309 17:13:25 -- scripts/common.sh@364 -- # decimal 1 00:04:05.309 17:13:25 -- scripts/common.sh@352 -- # local d=1 00:04:05.309 17:13:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.309 17:13:25 -- scripts/common.sh@354 -- # echo 1 00:04:05.309 17:13:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:05.309 17:13:25 -- scripts/common.sh@365 -- # decimal 2 00:04:05.309 17:13:25 -- scripts/common.sh@352 -- # local d=2 00:04:05.309 17:13:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.309 17:13:25 -- scripts/common.sh@354 -- # echo 2 00:04:05.309 17:13:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:05.309 17:13:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:05.309 17:13:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:05.309 17:13:25 -- scripts/common.sh@367 -- # return 0 00:04:05.310 17:13:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.310 17:13:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.310 --rc genhtml_branch_coverage=1 00:04:05.310 --rc genhtml_function_coverage=1 00:04:05.310 --rc genhtml_legend=1 00:04:05.310 --rc geninfo_all_blocks=1 00:04:05.310 --rc geninfo_unexecuted_blocks=1 00:04:05.310 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:05.310 ' 00:04:05.310 17:13:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.310 --rc genhtml_branch_coverage=1 00:04:05.310 --rc genhtml_function_coverage=1 00:04:05.310 --rc genhtml_legend=1 00:04:05.310 --rc geninfo_all_blocks=1 00:04:05.310 --rc geninfo_unexecuted_blocks=1 00:04:05.310 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:05.310 ' 00:04:05.310 17:13:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.310 --rc genhtml_branch_coverage=1 
00:04:05.310 --rc genhtml_function_coverage=1 00:04:05.310 --rc genhtml_legend=1 00:04:05.310 --rc geninfo_all_blocks=1 00:04:05.310 --rc geninfo_unexecuted_blocks=1 00:04:05.310 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:05.310 ' 00:04:05.310 17:13:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.310 --rc genhtml_branch_coverage=1 00:04:05.310 --rc genhtml_function_coverage=1 00:04:05.310 --rc genhtml_legend=1 00:04:05.310 --rc geninfo_all_blocks=1 00:04:05.310 --rc geninfo_unexecuted_blocks=1 00:04:05.310 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:05.310 ' 00:04:05.310 17:13:25 -- setup/driver.sh@68 -- # setup reset 00:04:05.310 17:13:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.310 17:13:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.585 17:13:29 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:10.585 17:13:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.585 17:13:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.585 17:13:29 -- common/autotest_common.sh@10 -- # set +x 00:04:10.585 ************************************ 00:04:10.585 START TEST guess_driver 00:04:10.585 ************************************ 00:04:10.585 17:13:29 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:10.585 17:13:29 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:10.585 17:13:29 -- setup/driver.sh@47 -- # local fail=0 00:04:10.585 17:13:29 -- setup/driver.sh@49 -- # pick_driver 00:04:10.585 17:13:29 -- setup/driver.sh@36 -- # vfio 00:04:10.585 17:13:29 -- setup/driver.sh@21 -- # local iommu_groups 00:04:10.585 17:13:29 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:10.585 17:13:30 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:10.585 17:13:30 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:10.585 17:13:30 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:10.585 17:13:30 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:10.585 17:13:30 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:10.585 17:13:30 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:10.585 17:13:30 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:10.585 17:13:30 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:10.585 17:13:30 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:10.585 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:10.585 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:10.585 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:10.585 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:10.585 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:10.585 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:10.585 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:10.585 17:13:30 -- setup/driver.sh@30 -- # return 0 00:04:10.585 17:13:30 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:10.585 17:13:30 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:10.585 17:13:30 -- setup/driver.sh@51 
-- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:10.585 17:13:30 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:10.585 Looking for driver=vfio-pci 00:04:10.585 17:13:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.585 17:13:30 -- setup/driver.sh@45 -- # setup output config 00:04:10.585 17:13:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.585 17:13:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.880 17:13:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.880 17:13:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.880 17:13:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.262 17:13:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.262 17:13:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.262 17:13:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.522 17:13:35 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:15.522 17:13:35 -- setup/driver.sh@65 -- # setup reset 00:04:15.522 17:13:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.522 17:13:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.722 00:04:19.722 real 0m9.313s 00:04:19.722 user 0m2.329s 00:04:19.722 sys 0m4.595s 00:04:19.722 17:13:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.722 17:13:39 -- common/autotest_common.sh@10 -- # set +x 00:04:19.722 ************************************ 00:04:19.722 END TEST guess_driver 00:04:19.722 ************************************ 00:04:19.722 00:04:19.722 real 0m14.463s 00:04:19.722 user 0m3.816s 00:04:19.722 sys 0m7.512s 00:04:19.722 17:13:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.722 17:13:39 -- common/autotest_common.sh@10 -- # set +x 00:04:19.722 ************************************ 00:04:19.722 END TEST driver 00:04:19.722 ************************************ 00:04:19.722 17:13:39 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:19.722 17:13:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.722 17:13:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.722 17:13:39 -- common/autotest_common.sh@10 -- # set +x 00:04:19.722 ************************************ 00:04:19.722 START TEST devices 00:04:19.722 ************************************ 00:04:19.722 17:13:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:19.722 * Looking for test storage... 
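
The "lt 1.15 2" / "cmp_versions 1.15 '<' 2" exchange near the top of this suite (and again below, at the start of the devices suite) is a pure-bash version comparison used to pick lcov flags: both strings are split on dots and dashes and compared component by component. A compact sketch of the idea, with missing components treated as 0 (the traced helper validates each field against ^[0-9]+$ instead):

  # Succeed if version $1 is strictly less than version $2.
  lt() {
      local -a ver1 ver2
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.x"    # succeeds: 1 < 2
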
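pick_driver's vfio branch, traced above, reduces to three checks: is the unsafe no-IOMMU override on, does /sys/kernel/iommu_groups have entries (176 on this host), and does modprobe --show-depends resolve vfio_pci to real .ko files. A sketch of that decision, assuming the same sysfs paths:

  # Decide whether vfio-pci is usable on this host.
  vfio_usable() {
      local unsafe=N
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      local groups=(/sys/kernel/iommu_groups/*)
      # Need an IOMMU (or the unsafe override) plus a resolvable module chain.
      { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
          modprobe --show-depends vfio_pci | grep -q '\.ko'
  }
  vfio_usable && echo vfio-pci
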
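The repeated "read -r _ _ _ _ marker setup_driver" lines above are guess_driver consuming setup.sh config output: on every line whose fifth field is the "->" arrow, the sixth field must name the chosen driver, otherwise fail is raised. A sketch of that verification loop (the field layout is inferred from the trace; rootdir stands for the spdk checkout):

  fail=0 driver=vfio-pci
  while read -r _ _ _ _ marker setup_driver; do
      [[ $marker == '->' ]] || continue            # skip lines without a binding
      [[ $setup_driver == "$driver" ]] || fail=1
  done < <("$rootdir/scripts/setup.sh" config)
  (( fail == 0 )) && echo "every device would bind to $driver"
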
00:04:19.982 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:19.982 17:13:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:19.982 17:13:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:19.982 17:13:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:19.982 17:13:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:19.982 17:13:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:19.982 17:13:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:19.982 17:13:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:19.982 17:13:39 -- scripts/common.sh@335 -- # IFS=.-: 00:04:19.982 17:13:39 -- scripts/common.sh@335 -- # read -ra ver1 00:04:19.982 17:13:39 -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.982 17:13:39 -- scripts/common.sh@336 -- # read -ra ver2 00:04:19.982 17:13:39 -- scripts/common.sh@337 -- # local 'op=<' 00:04:19.982 17:13:39 -- scripts/common.sh@339 -- # ver1_l=2 00:04:19.982 17:13:39 -- scripts/common.sh@340 -- # ver2_l=1 00:04:19.982 17:13:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:19.982 17:13:39 -- scripts/common.sh@343 -- # case "$op" in 00:04:19.982 17:13:39 -- scripts/common.sh@344 -- # : 1 00:04:19.982 17:13:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:19.982 17:13:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.982 17:13:39 -- scripts/common.sh@364 -- # decimal 1 00:04:19.982 17:13:39 -- scripts/common.sh@352 -- # local d=1 00:04:19.982 17:13:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.982 17:13:39 -- scripts/common.sh@354 -- # echo 1 00:04:19.982 17:13:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:19.982 17:13:39 -- scripts/common.sh@365 -- # decimal 2 00:04:19.982 17:13:39 -- scripts/common.sh@352 -- # local d=2 00:04:19.982 17:13:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.982 17:13:39 -- scripts/common.sh@354 -- # echo 2 00:04:19.982 17:13:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:19.982 17:13:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:19.982 17:13:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:19.982 17:13:39 -- scripts/common.sh@367 -- # return 0 00:04:19.982 17:13:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.982 17:13:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:19.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.982 --rc genhtml_branch_coverage=1 00:04:19.982 --rc genhtml_function_coverage=1 00:04:19.982 --rc genhtml_legend=1 00:04:19.982 --rc geninfo_all_blocks=1 00:04:19.982 --rc geninfo_unexecuted_blocks=1 00:04:19.982 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:19.982 ' 00:04:19.982 17:13:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:19.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.982 --rc genhtml_branch_coverage=1 00:04:19.982 --rc genhtml_function_coverage=1 00:04:19.982 --rc genhtml_legend=1 00:04:19.982 --rc geninfo_all_blocks=1 00:04:19.982 --rc geninfo_unexecuted_blocks=1 00:04:19.982 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:19.982 ' 00:04:19.982 17:13:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:19.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.982 --rc genhtml_branch_coverage=1 
00:04:19.982 --rc genhtml_function_coverage=1 00:04:19.982 --rc genhtml_legend=1 00:04:19.982 --rc geninfo_all_blocks=1 00:04:19.982 --rc geninfo_unexecuted_blocks=1 00:04:19.982 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:19.982 ' 00:04:19.982 17:13:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:19.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.982 --rc genhtml_branch_coverage=1 00:04:19.982 --rc genhtml_function_coverage=1 00:04:19.982 --rc genhtml_legend=1 00:04:19.982 --rc geninfo_all_blocks=1 00:04:19.982 --rc geninfo_unexecuted_blocks=1 00:04:19.982 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:19.982 ' 00:04:19.982 17:13:39 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:19.982 17:13:39 -- setup/devices.sh@192 -- # setup reset 00:04:19.982 17:13:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.982 17:13:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.180 17:13:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:24.180 17:13:43 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:24.180 17:13:43 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:24.180 17:13:43 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:24.180 17:13:43 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:24.180 17:13:43 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:24.180 17:13:43 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:24.180 17:13:43 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.180 17:13:43 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:24.180 17:13:43 -- setup/devices.sh@196 -- # blocks=() 00:04:24.180 17:13:43 -- setup/devices.sh@196 -- # declare -a blocks 00:04:24.180 17:13:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:24.180 17:13:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:24.180 17:13:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:24.180 17:13:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.180 17:13:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:24.180 17:13:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:24.180 17:13:43 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:24.180 17:13:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:24.180 17:13:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:24.180 17:13:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:24.180 17:13:43 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:24.180 No valid GPT data, bailing 00:04:24.180 17:13:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.180 17:13:43 -- scripts/common.sh@393 -- # pt= 00:04:24.180 17:13:43 -- scripts/common.sh@394 -- # return 1 00:04:24.180 17:13:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:24.180 17:13:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:24.180 17:13:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:24.180 17:13:43 -- setup/common.sh@80 -- # echo 1600321314816 00:04:24.180 17:13:43 -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:24.180 17:13:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.180 17:13:43 -- 
setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:24.180 17:13:43 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:24.180 17:13:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:24.180 17:13:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:24.180 17:13:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.180 17:13:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.180 17:13:43 -- common/autotest_common.sh@10 -- # set +x 00:04:24.180 ************************************ 00:04:24.180 START TEST nvme_mount 00:04:24.180 ************************************ 00:04:24.180 17:13:43 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:24.180 17:13:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:24.180 17:13:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:24.180 17:13:43 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.180 17:13:43 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.180 17:13:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:24.180 17:13:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.180 17:13:43 -- setup/common.sh@40 -- # local part_no=1 00:04:24.180 17:13:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:24.180 17:13:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.180 17:13:43 -- setup/common.sh@44 -- # parts=() 00:04:24.180 17:13:43 -- setup/common.sh@44 -- # local parts 00:04:24.180 17:13:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.180 17:13:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.180 17:13:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.180 17:13:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:24.180 17:13:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.180 17:13:43 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:24.180 17:13:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.180 17:13:43 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:24.749 Creating new GPT entries in memory. 00:04:24.749 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.749 other utilities. 00:04:24.749 17:13:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.749 17:13:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.749 17:13:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.749 17:13:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.749 17:13:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.687 Creating new GPT entries in memory. 00:04:25.687 The operation has completed successfully. 
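
Before nvme_mount starts, the devices suite has to pick a victim disk: the trace above skips zoned devices, treats spdk-gpt.py's "No valid GPT data, bailing" plus an empty blkid PTTYPE as "not in use", and requires at least min_disk_size (3221225472 bytes; this disk reports 1600321314816). A sketch of those gates, assuming stock sysfs and blkid and run as root:

  min_disk_size=$((3 * 1024 ** 3))    # 3221225472, as in the trace
  for block in /sys/block/nvme*n*; do
      dev=${block##*/}
      [[ $dev != *c* ]] || continue    # skip fabrics controller nodes, as the suite does
      [[ $(< "$block/queue/zoned") == none ]] || continue       # no zoned disks
      [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue  # has a table: in use
      size=$(( $(< "$block/size") * 512 ))    # sysfs size is in 512 B sectors
      (( size >= min_disk_size )) && echo "$dev qualifies ($size bytes)"
  done
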
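partition_drive then zaps the GPT and creates each 1 GiB partition with sgdisk under flock, while the backgrounded sync_dev_uevents.sh helper (the PID that "wait 325051" below collects) waits for the kernel's partition uevents. A sketch with udevadm settle as a stand-in for that helper; sectors are 512 B, so 2048..2099199 is exactly 1 GiB at 1 MiB alignment:

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 2097152 sectors = 1 GiB
  udevadm settle                                      # stand-in for sync_dev_uevents.sh
  [[ -b ${disk}p1 ]] && echo "partition node present"
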
00:04:25.687 17:13:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:25.687 17:13:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.687 17:13:45 -- setup/common.sh@62 -- # wait 325051 00:04:25.947 17:13:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.947 17:13:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:25.947 17:13:45 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.947 17:13:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:25.947 17:13:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:25.947 17:13:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.947 17:13:45 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.947 17:13:45 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:25.947 17:13:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:25.947 17:13:45 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.947 17:13:45 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.947 17:13:45 -- setup/devices.sh@53 -- # local found=0 00:04:25.947 17:13:45 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.947 17:13:45 -- setup/devices.sh@56 -- # : 00:04:25.947 17:13:45 -- setup/devices.sh@59 -- # local pci status 00:04:25.947 17:13:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.947 17:13:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:25.947 17:13:45 -- setup/devices.sh@47 -- # setup output config 00:04:25.947 17:13:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.947 17:13:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:29.355 17:13:48 -- setup/devices.sh@63 -- # found=1 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 
]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.355 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.355 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.356 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.356 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.356 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.356 17:13:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.356 17:13:48 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:29.356 17:13:48 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.356 17:13:48 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.356 17:13:48 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.356 17:13:48 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:29.356 17:13:48 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.356 17:13:48 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.356 17:13:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.356 17:13:48 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:29.356 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.356 17:13:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.356 17:13:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.615 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:29.615 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:29.615 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe 
(PMBR): 55 aa 00:04:29.615 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.615 17:13:49 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:29.615 17:13:49 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:29.615 17:13:49 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.615 17:13:49 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:29.615 17:13:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:29.615 17:13:49 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.615 17:13:49 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.616 17:13:49 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:29.616 17:13:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:29.616 17:13:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.616 17:13:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.616 17:13:49 -- setup/devices.sh@53 -- # local found=0 00:04:29.616 17:13:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.616 17:13:49 -- setup/devices.sh@56 -- # : 00:04:29.616 17:13:49 -- setup/devices.sh@59 -- # local pci status 00:04:29.616 17:13:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.616 17:13:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:29.616 17:13:49 -- setup/devices.sh@47 -- # setup output config 00:04:29.616 17:13:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.616 17:13:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:32.906 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.906 17:13:52 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:32.906 17:13:52 -- setup/devices.sh@63 -- # found=1 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.907 17:13:52 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.907 17:13:52 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.907 17:13:52 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.907 17:13:52 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.907 17:13:52 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.907 17:13:52 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:32.907 17:13:52 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:32.907 17:13:52 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:32.907 17:13:52 -- setup/devices.sh@50 -- # local mount_point= 00:04:32.907 17:13:52 -- setup/devices.sh@51 -- # local test_file= 00:04:32.907 17:13:52 -- setup/devices.sh@53 -- # local found=0 00:04:32.907 17:13:52 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.907 17:13:52 -- setup/devices.sh@59 -- # local pci status 00:04:32.907 17:13:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.907 17:13:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:32.907 17:13:52 -- setup/devices.sh@47 -- # setup output config 00:04:32.907 17:13:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.907 17:13:52 -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:36.195 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:36.196 17:13:55 -- setup/devices.sh@63 -- # found=1 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.196 17:13:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.196 17:13:55 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:36.196 17:13:55 -- setup/devices.sh@68 -- # return 0 00:04:36.196 17:13:55 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:36.196 17:13:55 -- setup/devices.sh@20 -- # mountpoint -q 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.196 17:13:55 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.196 17:13:55 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:36.196 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.196 00:04:36.196 real 0m12.584s 00:04:36.196 user 0m3.750s 00:04:36.196 sys 0m6.801s 00:04:36.196 17:13:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:36.196 17:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:36.196 ************************************ 00:04:36.196 END TEST nvme_mount 00:04:36.196 ************************************ 00:04:36.456 17:13:55 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:36.456 17:13:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.456 17:13:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.456 17:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 ************************************ 00:04:36.456 START TEST dm_mount 00:04:36.456 ************************************ 00:04:36.456 17:13:55 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:36.456 17:13:55 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:36.456 17:13:55 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:36.456 17:13:55 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:36.456 17:13:55 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:36.456 17:13:55 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:36.456 17:13:55 -- setup/common.sh@40 -- # local part_no=2 00:04:36.456 17:13:55 -- setup/common.sh@41 -- # local size=1073741824 00:04:36.456 17:13:55 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:36.456 17:13:55 -- setup/common.sh@44 -- # parts=() 00:04:36.456 17:13:55 -- setup/common.sh@44 -- # local parts 00:04:36.456 17:13:55 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:36.456 17:13:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.456 17:13:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.456 17:13:55 -- setup/common.sh@46 -- # (( part++ )) 00:04:36.456 17:13:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.456 17:13:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.456 17:13:55 -- setup/common.sh@46 -- # (( part++ )) 00:04:36.456 17:13:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.456 17:13:55 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:36.456 17:13:55 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:36.456 17:13:55 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.394 Creating new GPT entries in memory. 00:04:37.394 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:37.394 other utilities. 00:04:37.394 17:13:56 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:37.394 17:13:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.394 17:13:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:37.394 17:13:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:37.394 17:13:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:38.333 Creating new GPT entries in memory. 00:04:38.333 The operation has completed successfully. 
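
The mkfs/mount blocks above all follow one pattern: format the target with mkfs.ext4 -qF, mount it under test/setup/nvme_mount, and drop a test_nvme marker file for the verify pass to find. A sketch of that flow (paths shortened; the suite uses the full workspace path):

  dev=/dev/nvme0n1p1
  mnt=$PWD/nvme_mount          # suite: .../spdk/test/setup/nvme_mount
  mkdir -p "$mnt"
  mkfs.ext4 -qF "$dev"
  mount "$dev" "$mnt"
  touch "$mnt/test_nvme"       # marker file the verify step checks for
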
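cleanup_nvme, whose output is interleaved above, unmounts and then scrubs every signature on both the partition and the whole disk: the "2 bytes were erased at offset 0x00000438 (ext4): 53 ef" lines are the ext4 superblock magic going away, and the GPT/PMBR erasures at 0x200 and 0x1fe follow. A sketch of the teardown:

  mnt=$PWD/nvme_mount
  mountpoint -q "$mnt" && umount "$mnt"
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
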
00:04:38.333 17:13:58 -- setup/common.sh@57 -- # (( part++ )) 00:04:38.333 17:13:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.333 17:13:58 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.333 17:13:58 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.333 17:13:58 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:39.713 The operation has completed successfully. 00:04:39.713 17:13:59 -- setup/common.sh@57 -- # (( part++ )) 00:04:39.713 17:13:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.713 17:13:59 -- setup/common.sh@62 -- # wait 329557 00:04:39.713 17:13:59 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:39.713 17:13:59 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:39.713 17:13:59 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.713 17:13:59 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:39.713 17:13:59 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:39.713 17:13:59 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.713 17:13:59 -- setup/devices.sh@161 -- # break 00:04:39.713 17:13:59 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.713 17:13:59 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:39.713 17:13:59 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:39.713 17:13:59 -- setup/devices.sh@166 -- # dm=dm-0 00:04:39.713 17:13:59 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:39.713 17:13:59 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:39.713 17:13:59 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:39.713 17:13:59 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:39.713 17:13:59 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:39.713 17:13:59 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.713 17:13:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:39.713 17:13:59 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:39.713 17:13:59 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.713 17:13:59 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:39.713 17:13:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:39.713 17:13:59 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:39.713 17:13:59 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.713 17:13:59 -- setup/devices.sh@53 -- # local found=0 00:04:39.713 17:13:59 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:39.713 17:13:59 -- setup/devices.sh@56 -- # : 00:04:39.713 
17:13:59 -- setup/devices.sh@59 -- # local pci status 00:04:39.713 17:13:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.713 17:13:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:39.713 17:13:59 -- setup/devices.sh@47 -- # setup output config 00:04:39.713 17:13:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.713 17:13:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:42.250 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.250 17:14:02 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:42.250 17:14:02 -- setup/devices.sh@63 -- # found=1 00:04:42.250 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@62 -- # [[ 
0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.510 17:14:02 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:42.510 17:14:02 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:42.510 17:14:02 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.510 17:14:02 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.510 17:14:02 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:42.510 17:14:02 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:42.510 17:14:02 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:42.510 17:14:02 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:42.510 17:14:02 -- setup/devices.sh@50 -- # local mount_point= 00:04:42.510 17:14:02 -- setup/devices.sh@51 -- # local test_file= 00:04:42.510 17:14:02 -- setup/devices.sh@53 -- # local found=0 00:04:42.510 17:14:02 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.510 17:14:02 -- setup/devices.sh@59 -- # local pci status 00:04:42.510 17:14:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.510 17:14:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:42.510 17:14:02 -- setup/devices.sh@47 -- # setup output config 00:04:42.510 17:14:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.510 17:14:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:45.803 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:45.804 17:14:05 -- setup/devices.sh@63 -- # found=1 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- 
# read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.804 17:14:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.804 17:14:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:45.804 17:14:05 -- setup/devices.sh@68 -- # return 0 00:04:45.804 17:14:05 -- setup/devices.sh@187 -- # cleanup_dm 00:04:45.804 17:14:05 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:45.804 17:14:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:45.804 17:14:05 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:45.804 17:14:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:45.804 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:45.804 17:14:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:45.804 00:04:45.804 real 0m9.471s 00:04:45.804 user 0m2.025s 00:04:45.804 sys 0m4.430s 00:04:45.804 17:14:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.804 17:14:05 -- common/autotest_common.sh@10 -- # set +x 00:04:45.804 ************************************ 00:04:45.804 END TEST dm_mount 00:04:45.804 ************************************ 00:04:45.804 17:14:05 -- setup/devices.sh@1 -- # cleanup 00:04:45.804 17:14:05 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:45.804 17:14:05 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.804 17:14:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:45.804 17:14:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.804 17:14:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.064 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:46.064 /dev/nvme0n1: 8 
bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:46.064 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.064 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:46.064 17:14:05 -- setup/devices.sh@12 -- # cleanup_dm 00:04:46.064 17:14:05 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:46.064 17:14:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.064 17:14:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.064 17:14:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.064 17:14:05 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.064 17:14:05 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:46.064 00:04:46.064 real 0m26.389s 00:04:46.064 user 0m7.249s 00:04:46.064 sys 0m14.015s 00:04:46.064 17:14:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.064 17:14:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.064 ************************************ 00:04:46.064 END TEST devices 00:04:46.064 ************************************ 00:04:46.064 00:04:46.064 real 1m32.140s 00:04:46.064 user 0m28.139s 00:04:46.064 sys 0m52.656s 00:04:46.064 17:14:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.064 17:14:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.064 ************************************ 00:04:46.064 END TEST setup.sh 00:04:46.064 ************************************ 00:04:46.324 17:14:05 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:49.618 Hugepages 00:04:49.618 node hugesize free / total 00:04:49.618 node0 1048576kB 0 / 0 00:04:49.618 node0 2048kB 2048 / 2048 00:04:49.618 node1 1048576kB 0 / 0 00:04:49.618 node1 2048kB 0 / 0 00:04:49.618 00:04:49.618 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.618 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:49.618 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:49.618 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:49.618 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:49.618 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:49.618 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:49.618 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:49.618 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:49.618 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:49.618 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:49.618 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:49.618 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:49.618 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:49.618 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:49.618 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:49.618 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:49.618 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:49.618 17:14:09 -- spdk/autotest.sh@128 -- # uname -s 00:04:49.618 17:14:09 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:49.618 17:14:09 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:49.618 17:14:09 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:52.919 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:52.919 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:52.919 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:52.919 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 
00:04:53.178 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:53.178 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:54.556 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:54.815 17:14:14 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:55.753 17:14:15 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:55.753 17:14:15 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:55.753 17:14:15 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.753 17:14:15 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:55.753 17:14:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:55.753 17:14:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:55.753 17:14:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.753 17:14:15 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:55.753 17:14:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:55.753 17:14:15 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:55.753 17:14:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:04:55.753 17:14:15 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.043 Waiting for block devices as requested 00:04:59.043 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:59.043 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:59.043 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:59.303 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:59.303 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:59.303 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:59.563 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:59.563 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:59.563 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:59.822 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:59.822 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:59.822 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:59.822 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:00.080 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:00.080 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:00.080 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:00.338 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:00.338 17:14:20 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:00.338 17:14:20 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:00.596 17:14:20 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:00.596 17:14:20 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme 00:05:00.596 17:14:20 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:00.596 17:14:20 -- common/autotest_common.sh@1498 -- # [[ -z 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:00.596 17:14:20 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:00.597 17:14:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:00.597 17:14:20 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:00.597 17:14:20 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:00.597 17:14:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:00.597 17:14:20 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:00.597 17:14:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:00.597 17:14:20 -- common/autotest_common.sh@1540 -- # oacs=' 0xe' 00:05:00.597 17:14:20 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:00.597 17:14:20 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:00.597 17:14:20 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:00.597 17:14:20 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:00.597 17:14:20 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:00.597 17:14:20 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:00.597 17:14:20 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:00.597 17:14:20 -- common/autotest_common.sh@1552 -- # continue 00:05:00.597 17:14:20 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:00.597 17:14:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.597 17:14:20 -- common/autotest_common.sh@10 -- # set +x 00:05:00.597 17:14:20 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:00.597 17:14:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.597 17:14:20 -- common/autotest_common.sh@10 -- # set +x 00:05:00.597 17:14:20 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:03.887 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:03.887 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:03.887 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:03.887 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:03.887 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:03.887 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:03.887 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:04.146 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:04.146 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:04.146 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:04.146 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:04.146 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:04.146 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:04.146 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:04.146 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:04.146 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:05.525 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:05.848 17:14:25 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:05.848 17:14:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.848 17:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:05.848 17:14:25 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:05.848 17:14:25 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:05.848 17:14:25 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.848 17:14:25 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:05.848 17:14:25 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:05.848 17:14:25 -- common/autotest_common.sh@1574 -- # 
get_nvme_bdfs 00:05:05.848 17:14:25 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:05.848 17:14:25 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:05.848 17:14:25 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.848 17:14:25 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:05.848 17:14:25 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:05.848 17:14:25 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:05.848 17:14:25 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:05.848 17:14:25 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.848 17:14:25 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:05.848 17:14:25 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:05:05.848 17:14:25 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:05.848 17:14:25 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:05:05.848 17:14:25 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0 00:05:05.848 17:14:25 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]] 00:05:05.848 17:14:25 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=339485 00:05:05.848 17:14:25 -- common/autotest_common.sh@1593 -- # waitforlisten 339485 00:05:05.848 17:14:25 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.848 17:14:25 -- common/autotest_common.sh@829 -- # '[' -z 339485 ']' 00:05:05.848 17:14:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.848 17:14:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.848 17:14:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.848 17:14:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.848 17:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:05.848 [2024-11-09 17:14:25.534889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
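Note: the get_nvme_bdfs helper traced here builds its BDF list by piping the generated bdev config through jq. A standalone equivalent using the same commands that appear in the trace (workspace path as in this job):

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# gen_nvme.sh emits a JSON config; each controller carries its PCI address in params.traddr
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
printf '%s\n' "${bdfs[@]}"   # on this node: 0000:d8:00.0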
00:05:05.848 [2024-11-09 17:14:25.534951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339485 ] 00:05:05.848 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.848 [2024-11-09 17:14:25.604211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.108 [2024-11-09 17:14:25.682460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.108 [2024-11-09 17:14:25.682569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.677 17:14:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.677 17:14:26 -- common/autotest_common.sh@862 -- # return 0 00:05:06.677 17:14:26 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:05:06.677 17:14:26 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:05:06.677 17:14:26 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:09.972 nvme0n1 00:05:09.972 17:14:29 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:09.972 [2024-11-09 17:14:29.548189] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:09.972 request: 00:05:09.972 { 00:05:09.972 "nvme_ctrlr_name": "nvme0", 00:05:09.972 "password": "test", 00:05:09.972 "method": "bdev_nvme_opal_revert", 00:05:09.972 "req_id": 1 00:05:09.972 } 00:05:09.972 Got JSON-RPC error response 00:05:09.972 response: 00:05:09.972 { 00:05:09.972 "code": -32602, 00:05:09.972 "message": "Invalid parameters" 00:05:09.972 } 00:05:09.972 17:14:29 -- common/autotest_common.sh@1599 -- # true 00:05:09.972 17:14:29 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:05:09.972 17:14:29 -- common/autotest_common.sh@1603 -- # killprocess 339485 00:05:09.972 17:14:29 -- common/autotest_common.sh@936 -- # '[' -z 339485 ']' 00:05:09.972 17:14:29 -- common/autotest_common.sh@940 -- # kill -0 339485 00:05:09.972 17:14:29 -- common/autotest_common.sh@941 -- # uname 00:05:09.972 17:14:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.972 17:14:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 339485 00:05:09.972 17:14:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.972 17:14:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.972 17:14:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 339485' 00:05:09.972 killing process with pid 339485 00:05:09.972 17:14:29 -- common/autotest_common.sh@955 -- # kill 339485 00:05:09.972 17:14:29 -- common/autotest_common.sh@960 -- # wait 339485 00:05:12.507 17:14:31 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:12.507 17:14:31 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:12.507 17:14:31 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:12.507 17:14:31 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:12.507 17:14:31 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:12.507 17:14:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.507 17:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:12.507 17:14:31 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:12.507 17:14:31 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.507 17:14:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.507 17:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:12.507 ************************************ 00:05:12.507 START TEST env 00:05:12.507 ************************************ 00:05:12.508 17:14:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:12.508 * Looking for test storage... 00:05:12.508 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:12.508 17:14:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:12.508 17:14:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:12.508 17:14:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:12.508 17:14:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:12.508 17:14:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:12.508 17:14:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:12.508 17:14:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:12.508 17:14:31 -- scripts/common.sh@335 -- # IFS=.-: 00:05:12.508 17:14:31 -- scripts/common.sh@335 -- # read -ra ver1 00:05:12.508 17:14:31 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.508 17:14:31 -- scripts/common.sh@336 -- # read -ra ver2 00:05:12.508 17:14:31 -- scripts/common.sh@337 -- # local 'op=<' 00:05:12.508 17:14:31 -- scripts/common.sh@339 -- # ver1_l=2 00:05:12.508 17:14:31 -- scripts/common.sh@340 -- # ver2_l=1 00:05:12.508 17:14:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:12.508 17:14:31 -- scripts/common.sh@343 -- # case "$op" in 00:05:12.508 17:14:31 -- scripts/common.sh@344 -- # : 1 00:05:12.508 17:14:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:12.508 17:14:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.508 17:14:31 -- scripts/common.sh@364 -- # decimal 1 00:05:12.508 17:14:31 -- scripts/common.sh@352 -- # local d=1 00:05:12.508 17:14:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.508 17:14:31 -- scripts/common.sh@354 -- # echo 1 00:05:12.508 17:14:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:12.508 17:14:31 -- scripts/common.sh@365 -- # decimal 2 00:05:12.508 17:14:31 -- scripts/common.sh@352 -- # local d=2 00:05:12.508 17:14:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.508 17:14:31 -- scripts/common.sh@354 -- # echo 2 00:05:12.508 17:14:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:12.508 17:14:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:12.508 17:14:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:12.508 17:14:31 -- scripts/common.sh@367 -- # return 0 00:05:12.508 17:14:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.508 17:14:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:12.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.508 --rc genhtml_branch_coverage=1 00:05:12.508 --rc genhtml_function_coverage=1 00:05:12.508 --rc genhtml_legend=1 00:05:12.508 --rc geninfo_all_blocks=1 00:05:12.508 --rc geninfo_unexecuted_blocks=1 00:05:12.508 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.508 ' 00:05:12.508 17:14:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:12.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.508 --rc genhtml_branch_coverage=1 00:05:12.508 --rc genhtml_function_coverage=1 00:05:12.508 --rc genhtml_legend=1 00:05:12.508 --rc geninfo_all_blocks=1 00:05:12.508 --rc geninfo_unexecuted_blocks=1 00:05:12.508 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.508 ' 00:05:12.508 17:14:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:12.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.508 --rc genhtml_branch_coverage=1 00:05:12.508 --rc genhtml_function_coverage=1 00:05:12.508 --rc genhtml_legend=1 00:05:12.508 --rc geninfo_all_blocks=1 00:05:12.508 --rc geninfo_unexecuted_blocks=1 00:05:12.508 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.508 ' 00:05:12.508 17:14:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:12.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.508 --rc genhtml_branch_coverage=1 00:05:12.508 --rc genhtml_function_coverage=1 00:05:12.508 --rc genhtml_legend=1 00:05:12.508 --rc geninfo_all_blocks=1 00:05:12.508 --rc geninfo_unexecuted_blocks=1 00:05:12.508 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.508 ' 00:05:12.508 17:14:32 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:12.508 17:14:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.508 17:14:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.508 17:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.508 ************************************ 00:05:12.508 START TEST env_memory 00:05:12.508 ************************************ 00:05:12.508 17:14:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 
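Note: the lcov gate traced above (lt 1.15 2 via cmp_versions) compares dotted versions field by field. A simplified standalone sketch of the same idea, not the exact scripts/common.sh implementation (which also splits on '-' and ':'):

lt() {  # success when dotted version $1 sorts strictly before $2
  local IFS=. i
  local -a a=($1) b=($2)
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1  # equal versions are not 'less than'
}
lt 1.15 2 && echo 'older lcov: branch/function coverage flags selected'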
00:05:12.508 00:05:12.508 00:05:12.508 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.508 http://cunit.sourceforge.net/ 00:05:12.508 00:05:12.508 00:05:12.508 Suite: memory 00:05:12.508 Test: alloc and free memory map ...[2024-11-09 17:14:32.043082] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:12.508 passed 00:05:12.508 Test: mem map translation ...[2024-11-09 17:14:32.056089] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:12.508 [2024-11-09 17:14:32.056113] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:12.508 [2024-11-09 17:14:32.056158] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:12.508 [2024-11-09 17:14:32.056170] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:12.508 passed 00:05:12.508 Test: mem map registration ...[2024-11-09 17:14:32.075722] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:12.508 [2024-11-09 17:14:32.075741] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:12.508 passed 00:05:12.508 Test: mem map adjacent registrations ...passed 00:05:12.508 00:05:12.508 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.508 suites 1 1 n/a 0 0 00:05:12.508 tests 4 4 4 0 0 00:05:12.508 asserts 152 152 152 0 n/a 00:05:12.508 00:05:12.508 Elapsed time = 0.082 seconds 00:05:12.508 00:05:12.508 real 0m0.096s 00:05:12.508 user 0m0.083s 00:05:12.508 sys 0m0.012s 00:05:12.508 17:14:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.508 17:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.508 ************************************ 00:05:12.508 END TEST env_memory 00:05:12.508 ************************************ 00:05:12.508 17:14:32 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:12.508 17:14:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.508 17:14:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.508 17:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.508 ************************************ 00:05:12.508 START TEST env_vtophys 00:05:12.508 ************************************ 00:05:12.508 17:14:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:12.508 EAL: lib.eal log level changed from notice to debug 00:05:12.508 EAL: Detected lcore 0 as core 0 on socket 0 00:05:12.508 EAL: Detected lcore 1 as core 1 on socket 0 00:05:12.508 EAL: Detected lcore 2 as core 2 on socket 0 00:05:12.508 EAL: Detected lcore 3 as core 3 on socket 0 00:05:12.508 EAL: Detected lcore 4 as core 4 on socket 0 00:05:12.508 EAL: Detected lcore 5 as core 5 on socket 0 00:05:12.508 EAL: Detected lcore 6 as 
core 6 on socket 0 00:05:12.508 EAL: Detected lcore 7 as core 8 on socket 0 00:05:12.508 EAL: Detected lcore 8 as core 9 on socket 0 00:05:12.508 EAL: Detected lcore 9 as core 10 on socket 0 00:05:12.508 EAL: Detected lcore 10 as core 11 on socket 0 00:05:12.508 EAL: Detected lcore 11 as core 12 on socket 0 00:05:12.508 EAL: Detected lcore 12 as core 13 on socket 0 00:05:12.508 EAL: Detected lcore 13 as core 14 on socket 0 00:05:12.508 EAL: Detected lcore 14 as core 16 on socket 0 00:05:12.508 EAL: Detected lcore 15 as core 17 on socket 0 00:05:12.508 EAL: Detected lcore 16 as core 18 on socket 0 00:05:12.508 EAL: Detected lcore 17 as core 19 on socket 0 00:05:12.508 EAL: Detected lcore 18 as core 20 on socket 0 00:05:12.508 EAL: Detected lcore 19 as core 21 on socket 0 00:05:12.508 EAL: Detected lcore 20 as core 22 on socket 0 00:05:12.508 EAL: Detected lcore 21 as core 24 on socket 0 00:05:12.508 EAL: Detected lcore 22 as core 25 on socket 0 00:05:12.508 EAL: Detected lcore 23 as core 26 on socket 0 00:05:12.508 EAL: Detected lcore 24 as core 27 on socket 0 00:05:12.508 EAL: Detected lcore 25 as core 28 on socket 0 00:05:12.508 EAL: Detected lcore 26 as core 29 on socket 0 00:05:12.508 EAL: Detected lcore 27 as core 30 on socket 0 00:05:12.508 EAL: Detected lcore 28 as core 0 on socket 1 00:05:12.508 EAL: Detected lcore 29 as core 1 on socket 1 00:05:12.508 EAL: Detected lcore 30 as core 2 on socket 1 00:05:12.508 EAL: Detected lcore 31 as core 3 on socket 1 00:05:12.508 EAL: Detected lcore 32 as core 4 on socket 1 00:05:12.508 EAL: Detected lcore 33 as core 5 on socket 1 00:05:12.508 EAL: Detected lcore 34 as core 6 on socket 1 00:05:12.508 EAL: Detected lcore 35 as core 8 on socket 1 00:05:12.508 EAL: Detected lcore 36 as core 9 on socket 1 00:05:12.508 EAL: Detected lcore 37 as core 10 on socket 1 00:05:12.508 EAL: Detected lcore 38 as core 11 on socket 1 00:05:12.508 EAL: Detected lcore 39 as core 12 on socket 1 00:05:12.508 EAL: Detected lcore 40 as core 13 on socket 1 00:05:12.508 EAL: Detected lcore 41 as core 14 on socket 1 00:05:12.509 EAL: Detected lcore 42 as core 16 on socket 1 00:05:12.509 EAL: Detected lcore 43 as core 17 on socket 1 00:05:12.509 EAL: Detected lcore 44 as core 18 on socket 1 00:05:12.509 EAL: Detected lcore 45 as core 19 on socket 1 00:05:12.509 EAL: Detected lcore 46 as core 20 on socket 1 00:05:12.509 EAL: Detected lcore 47 as core 21 on socket 1 00:05:12.509 EAL: Detected lcore 48 as core 22 on socket 1 00:05:12.509 EAL: Detected lcore 49 as core 24 on socket 1 00:05:12.509 EAL: Detected lcore 50 as core 25 on socket 1 00:05:12.509 EAL: Detected lcore 51 as core 26 on socket 1 00:05:12.509 EAL: Detected lcore 52 as core 27 on socket 1 00:05:12.509 EAL: Detected lcore 53 as core 28 on socket 1 00:05:12.509 EAL: Detected lcore 54 as core 29 on socket 1 00:05:12.509 EAL: Detected lcore 55 as core 30 on socket 1 00:05:12.509 EAL: Detected lcore 56 as core 0 on socket 0 00:05:12.509 EAL: Detected lcore 57 as core 1 on socket 0 00:05:12.509 EAL: Detected lcore 58 as core 2 on socket 0 00:05:12.509 EAL: Detected lcore 59 as core 3 on socket 0 00:05:12.509 EAL: Detected lcore 60 as core 4 on socket 0 00:05:12.509 EAL: Detected lcore 61 as core 5 on socket 0 00:05:12.509 EAL: Detected lcore 62 as core 6 on socket 0 00:05:12.509 EAL: Detected lcore 63 as core 8 on socket 0 00:05:12.509 EAL: Detected lcore 64 as core 9 on socket 0 00:05:12.509 EAL: Detected lcore 65 as core 10 on socket 0 00:05:12.509 EAL: Detected lcore 66 as core 11 on socket 0 00:05:12.509 EAL: 
Detected lcore 67 as core 12 on socket 0 00:05:12.509 EAL: Detected lcore 68 as core 13 on socket 0 00:05:12.509 EAL: Detected lcore 69 as core 14 on socket 0 00:05:12.509 EAL: Detected lcore 70 as core 16 on socket 0 00:05:12.509 EAL: Detected lcore 71 as core 17 on socket 0 00:05:12.509 EAL: Detected lcore 72 as core 18 on socket 0 00:05:12.509 EAL: Detected lcore 73 as core 19 on socket 0 00:05:12.509 EAL: Detected lcore 74 as core 20 on socket 0 00:05:12.509 EAL: Detected lcore 75 as core 21 on socket 0 00:05:12.509 EAL: Detected lcore 76 as core 22 on socket 0 00:05:12.509 EAL: Detected lcore 77 as core 24 on socket 0 00:05:12.509 EAL: Detected lcore 78 as core 25 on socket 0 00:05:12.509 EAL: Detected lcore 79 as core 26 on socket 0 00:05:12.509 EAL: Detected lcore 80 as core 27 on socket 0 00:05:12.509 EAL: Detected lcore 81 as core 28 on socket 0 00:05:12.509 EAL: Detected lcore 82 as core 29 on socket 0 00:05:12.509 EAL: Detected lcore 83 as core 30 on socket 0 00:05:12.509 EAL: Detected lcore 84 as core 0 on socket 1 00:05:12.509 EAL: Detected lcore 85 as core 1 on socket 1 00:05:12.509 EAL: Detected lcore 86 as core 2 on socket 1 00:05:12.509 EAL: Detected lcore 87 as core 3 on socket 1 00:05:12.509 EAL: Detected lcore 88 as core 4 on socket 1 00:05:12.509 EAL: Detected lcore 89 as core 5 on socket 1 00:05:12.509 EAL: Detected lcore 90 as core 6 on socket 1 00:05:12.509 EAL: Detected lcore 91 as core 8 on socket 1 00:05:12.509 EAL: Detected lcore 92 as core 9 on socket 1 00:05:12.509 EAL: Detected lcore 93 as core 10 on socket 1 00:05:12.509 EAL: Detected lcore 94 as core 11 on socket 1 00:05:12.509 EAL: Detected lcore 95 as core 12 on socket 1 00:05:12.509 EAL: Detected lcore 96 as core 13 on socket 1 00:05:12.509 EAL: Detected lcore 97 as core 14 on socket 1 00:05:12.509 EAL: Detected lcore 98 as core 16 on socket 1 00:05:12.509 EAL: Detected lcore 99 as core 17 on socket 1 00:05:12.509 EAL: Detected lcore 100 as core 18 on socket 1 00:05:12.509 EAL: Detected lcore 101 as core 19 on socket 1 00:05:12.509 EAL: Detected lcore 102 as core 20 on socket 1 00:05:12.509 EAL: Detected lcore 103 as core 21 on socket 1 00:05:12.509 EAL: Detected lcore 104 as core 22 on socket 1 00:05:12.509 EAL: Detected lcore 105 as core 24 on socket 1 00:05:12.509 EAL: Detected lcore 106 as core 25 on socket 1 00:05:12.509 EAL: Detected lcore 107 as core 26 on socket 1 00:05:12.509 EAL: Detected lcore 108 as core 27 on socket 1 00:05:12.509 EAL: Detected lcore 109 as core 28 on socket 1 00:05:12.509 EAL: Detected lcore 110 as core 29 on socket 1 00:05:12.509 EAL: Detected lcore 111 as core 30 on socket 1 00:05:12.509 EAL: Maximum logical cores by configuration: 128 00:05:12.509 EAL: Detected CPU lcores: 112 00:05:12.509 EAL: Detected NUMA nodes: 2 00:05:12.509 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:12.509 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:12.509 EAL: Checking presence of .so 'librte_eal.so' 00:05:12.509 EAL: Detected static linkage of DPDK 00:05:12.509 EAL: No shared files mode enabled, IPC will be disabled 00:05:12.509 EAL: Bus pci wants IOVA as 'DC' 00:05:12.509 EAL: Buses did not request a specific IOVA mode. 00:05:12.509 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:12.509 EAL: Selected IOVA mode 'VA' 00:05:12.509 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.509 EAL: Probing VFIO support... 
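Note: EAL's "VFIO support initialized" below presumes the host IOMMU is enabled and the target device sits behind vfio-pci. Two quick host-side checks, standard kernel interfaces rather than anything specific to this job:

# IOMMU groups only populate when the kernel IOMMU is active (e.g. intel_iommu=on)
ls /sys/kernel/iommu_groups | head
# setup.sh bound the NVMe to vfio-pci earlier in this log; verify with:
lspci -k -s d8:00.0 | grep 'Kernel driver in use'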
00:05:12.509 EAL: IOMMU type 1 (Type 1) is supported 00:05:12.509 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:12.509 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:12.509 EAL: VFIO support initialized 00:05:12.509 EAL: Ask a virtual area of 0x2e000 bytes 00:05:12.509 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:12.509 EAL: Setting up physically contiguous memory... 00:05:12.509 EAL: Setting maximum number of open files to 524288 00:05:12.509 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:12.509 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:12.509 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:12.509 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.509 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:12.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.509 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.509 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:12.509 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:12.509 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.509 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:12.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.509 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.509 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:12.509 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:12.509 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.509 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:12.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.509 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.509 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:12.509 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:12.509 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.509 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:12.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.509 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.509 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:12.509 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:12.509 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:12.509 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.509 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:12.509 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:12.509 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.509 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:12.509 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:12.509 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.509 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:12.509 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:12.509 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.509 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:12.509 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:12.509 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.509 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:12.509 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:12.509 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.509 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:12.509 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:12.509 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.509 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:12.509 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:12.509 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.509 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:12.509 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:12.509 EAL: Hugepages will be freed exactly as allocated. 00:05:12.509 EAL: No shared files mode enabled, IPC is disabled 00:05:12.509 EAL: No shared files mode enabled, IPC is disabled 00:05:12.509 EAL: TSC frequency is ~2500000 KHz 00:05:12.509 EAL: Main lcore 0 is ready (tid=7f8a40832a00;cpuset=[0]) 00:05:12.509 EAL: Trying to obtain current memory policy. 00:05:12.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.509 EAL: Restoring previous memory policy: 0 00:05:12.509 EAL: request: mp_malloc_sync 00:05:12.509 EAL: No shared files mode enabled, IPC is disabled 00:05:12.509 EAL: Heap on socket 0 was expanded by 2MB 00:05:12.509 EAL: No shared files mode enabled, IPC is disabled 00:05:12.509 EAL: Mem event callback 'spdk:(nil)' registered 00:05:12.509 00:05:12.509 00:05:12.509 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.509 http://cunit.sourceforge.net/ 00:05:12.509 00:05:12.509 00:05:12.509 Suite: components_suite 00:05:12.509 Test: vtophys_malloc_test ...passed 00:05:12.509 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:12.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.509 EAL: Restoring previous memory policy: 4 00:05:12.509 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.509 EAL: request: mp_malloc_sync 00:05:12.509 EAL: No shared files mode enabled, IPC is disabled 00:05:12.509 EAL: Heap on socket 0 was expanded by 4MB 00:05:12.509 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.509 EAL: request: mp_malloc_sync 00:05:12.509 EAL: No shared files mode enabled, IPC is disabled 00:05:12.509 EAL: Heap on socket 0 was shrunk by 4MB 00:05:12.509 EAL: Trying to obtain current memory policy. 00:05:12.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.509 EAL: Restoring previous memory policy: 4 00:05:12.509 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.509 EAL: request: mp_malloc_sync 00:05:12.509 EAL: No shared files mode enabled, IPC is disabled 00:05:12.509 EAL: Heap on socket 0 was expanded by 6MB 00:05:12.509 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.510 EAL: request: mp_malloc_sync 00:05:12.510 EAL: No shared files mode enabled, IPC is disabled 00:05:12.510 EAL: Heap on socket 0 was shrunk by 6MB 00:05:12.510 EAL: Trying to obtain current memory policy. 00:05:12.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.510 EAL: Restoring previous memory policy: 4 00:05:12.510 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.510 EAL: request: mp_malloc_sync 00:05:12.510 EAL: No shared files mode enabled, IPC is disabled 00:05:12.510 EAL: Heap on socket 0 was expanded by 10MB 00:05:12.510 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.510 EAL: request: mp_malloc_sync 00:05:12.510 EAL: No shared files mode enabled, IPC is disabled 00:05:12.510 EAL: Heap on socket 0 was shrunk by 10MB 00:05:12.510 EAL: Trying to obtain current memory policy. 
00:05:12.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.510 EAL: Restoring previous memory policy: 4 00:05:12.510 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.510 EAL: request: mp_malloc_sync 00:05:12.510 EAL: No shared files mode enabled, IPC is disabled 00:05:12.510 EAL: Heap on socket 0 was expanded by 18MB 00:05:12.510 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.510 EAL: request: mp_malloc_sync 00:05:12.510 EAL: No shared files mode enabled, IPC is disabled 00:05:12.510 EAL: Heap on socket 0 was shrunk by 18MB 00:05:12.510 EAL: Trying to obtain current memory policy. 00:05:12.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.510 EAL: Restoring previous memory policy: 4 00:05:12.510 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.510 EAL: request: mp_malloc_sync 00:05:12.510 EAL: No shared files mode enabled, IPC is disabled 00:05:12.510 EAL: Heap on socket 0 was expanded by 34MB 00:05:12.510 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.510 EAL: request: mp_malloc_sync 00:05:12.510 EAL: No shared files mode enabled, IPC is disabled 00:05:12.510 EAL: Heap on socket 0 was shrunk by 34MB 00:05:12.510 EAL: Trying to obtain current memory policy. 00:05:12.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.510 EAL: Restoring previous memory policy: 4 00:05:12.510 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.510 EAL: request: mp_malloc_sync 00:05:12.510 EAL: No shared files mode enabled, IPC is disabled 00:05:12.510 EAL: Heap on socket 0 was expanded by 66MB 00:05:12.769 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.769 EAL: request: mp_malloc_sync 00:05:12.769 EAL: No shared files mode enabled, IPC is disabled 00:05:12.769 EAL: Heap on socket 0 was shrunk by 66MB 00:05:12.769 EAL: Trying to obtain current memory policy. 00:05:12.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.769 EAL: Restoring previous memory policy: 4 00:05:12.769 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.769 EAL: request: mp_malloc_sync 00:05:12.769 EAL: No shared files mode enabled, IPC is disabled 00:05:12.769 EAL: Heap on socket 0 was expanded by 130MB 00:05:12.769 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.769 EAL: request: mp_malloc_sync 00:05:12.769 EAL: No shared files mode enabled, IPC is disabled 00:05:12.769 EAL: Heap on socket 0 was shrunk by 130MB 00:05:12.769 EAL: Trying to obtain current memory policy. 00:05:12.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.769 EAL: Restoring previous memory policy: 4 00:05:12.769 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.769 EAL: request: mp_malloc_sync 00:05:12.769 EAL: No shared files mode enabled, IPC is disabled 00:05:12.769 EAL: Heap on socket 0 was expanded by 258MB 00:05:12.769 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.769 EAL: request: mp_malloc_sync 00:05:12.769 EAL: No shared files mode enabled, IPC is disabled 00:05:12.769 EAL: Heap on socket 0 was shrunk by 258MB 00:05:12.769 EAL: Trying to obtain current memory policy. 
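Note on the sizes in this malloc loop: each round of vtophys_spdk_malloc_test doubles the allocation, and every expansion EAL reports is that power of two plus one extra 2 MB hugepage; the extra page is an observation from this log (plausibly allocator metadata), not a documented contract:

# reproduce the observed expansion sequence: 4, 6, 10, 18, ... 1026 MB
for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done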
00:05:12.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.028 EAL: Restoring previous memory policy: 4 00:05:13.028 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.028 EAL: request: mp_malloc_sync 00:05:13.028 EAL: No shared files mode enabled, IPC is disabled 00:05:13.028 EAL: Heap on socket 0 was expanded by 514MB 00:05:13.028 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.028 EAL: request: mp_malloc_sync 00:05:13.028 EAL: No shared files mode enabled, IPC is disabled 00:05:13.028 EAL: Heap on socket 0 was shrunk by 514MB 00:05:13.028 EAL: Trying to obtain current memory policy. 00:05:13.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.286 EAL: Restoring previous memory policy: 4 00:05:13.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.286 EAL: request: mp_malloc_sync 00:05:13.286 EAL: No shared files mode enabled, IPC is disabled 00:05:13.286 EAL: Heap on socket 0 was expanded by 1026MB 00:05:13.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.545 EAL: request: mp_malloc_sync 00:05:13.545 EAL: No shared files mode enabled, IPC is disabled 00:05:13.545 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:13.545 passed 00:05:13.545 00:05:13.545 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.545 suites 1 1 n/a 0 0 00:05:13.545 tests 2 2 2 0 0 00:05:13.545 asserts 497 497 497 0 n/a 00:05:13.545 00:05:13.545 Elapsed time = 0.963 seconds 00:05:13.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.545 EAL: request: mp_malloc_sync 00:05:13.545 EAL: No shared files mode enabled, IPC is disabled 00:05:13.545 EAL: Heap on socket 0 was shrunk by 2MB 00:05:13.545 EAL: No shared files mode enabled, IPC is disabled 00:05:13.545 EAL: No shared files mode enabled, IPC is disabled 00:05:13.545 EAL: No shared files mode enabled, IPC is disabled 00:05:13.545 00:05:13.545 real 0m1.078s 00:05:13.545 user 0m0.629s 00:05:13.545 sys 0m0.426s 00:05:13.545 17:14:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.545 17:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.545 ************************************ 00:05:13.545 END TEST env_vtophys 00:05:13.545 ************************************ 00:05:13.545 17:14:33 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:13.545 17:14:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.545 17:14:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.545 17:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.545 ************************************ 00:05:13.545 START TEST env_pci 00:05:13.545 ************************************ 00:05:13.545 17:14:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:13.545 00:05:13.545 00:05:13.545 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.545 http://cunit.sourceforge.net/ 00:05:13.545 00:05:13.545 00:05:13.545 Suite: pci 00:05:13.545 Test: pci_hook ...[2024-11-09 17:14:33.292644] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 341012 has claimed it 00:05:13.804 EAL: Cannot find device (10000:00:01.0) 00:05:13.804 EAL: Failed to attach device on primary process 00:05:13.804 passed 00:05:13.804 00:05:13.804 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.804 suites 1 1 n/a 0 0 00:05:13.804 tests 1 1 1 0 0 
00:05:13.804 asserts 25 25 25 0 n/a 00:05:13.804 00:05:13.804 Elapsed time = 0.034 seconds 00:05:13.804 00:05:13.804 real 0m0.054s 00:05:13.804 user 0m0.011s 00:05:13.804 sys 0m0.043s 00:05:13.804 17:14:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.804 17:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.804 ************************************ 00:05:13.804 END TEST env_pci 00:05:13.804 ************************************ 00:05:13.804 17:14:33 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:13.804 17:14:33 -- env/env.sh@15 -- # uname 00:05:13.804 17:14:33 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:13.804 17:14:33 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:13.804 17:14:33 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:13.804 17:14:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:13.804 17:14:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.804 17:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.804 ************************************ 00:05:13.804 START TEST env_dpdk_post_init 00:05:13.804 ************************************ 00:05:13.804 17:14:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:13.804 EAL: Detected CPU lcores: 112 00:05:13.804 EAL: Detected NUMA nodes: 2 00:05:13.804 EAL: Detected static linkage of DPDK 00:05:13.804 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:13.804 EAL: Selected IOVA mode 'VA' 00:05:13.804 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.804 EAL: VFIO support initialized 00:05:13.804 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.804 EAL: Using IOMMU type 1 (Type 1) 00:05:14.742 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:18.029 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:18.029 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:18.597 Starting DPDK initialization... 00:05:18.597 Starting SPDK post initialization... 00:05:18.597 SPDK NVMe probe 00:05:18.597 Attaching to 0000:d8:00.0 00:05:18.597 Attached to 0000:d8:00.0 00:05:18.597 Cleaning up... 
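For reference, the post-init binary above can be invoked by hand with exactly the arguments the harness passed; a minimal sketch assuming the same workspace layout:

```bash
#!/usr/bin/env bash
# Minimal sketch of running the same test by hand; the path mirrors this
# workspace layout. -c 0x1 pins the run to a single core, and the fixed
# --base-virtaddr keeps mappings predictable for multi-process setups.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
sudo "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" \
    -c 0x1 --base-virtaddr=0x200000000000
```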
00:05:18.597 00:05:18.597 real 0m4.753s 00:05:18.597 user 0m3.615s 00:05:18.597 sys 0m0.379s 00:05:18.597 17:14:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.597 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.597 ************************************ 00:05:18.597 END TEST env_dpdk_post_init 00:05:18.597 ************************************ 00:05:18.597 17:14:38 -- env/env.sh@26 -- # uname 00:05:18.597 17:14:38 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.597 17:14:38 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.597 17:14:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.597 17:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.597 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.597 ************************************ 00:05:18.597 START TEST env_mem_callbacks 00:05:18.597 ************************************ 00:05:18.597 17:14:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.597 EAL: Detected CPU lcores: 112 00:05:18.597 EAL: Detected NUMA nodes: 2 00:05:18.597 EAL: Detected static linkage of DPDK 00:05:18.597 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.597 EAL: Selected IOVA mode 'VA' 00:05:18.597 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.597 EAL: VFIO support initialized 00:05:18.597 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.597 00:05:18.597 00:05:18.597 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.597 http://cunit.sourceforge.net/ 00:05:18.597 00:05:18.597 00:05:18.597 Suite: memory 00:05:18.597 Test: test ... 
00:05:18.597 register 0x200000200000 2097152 00:05:18.597 malloc 3145728 00:05:18.597 register 0x200000400000 4194304 00:05:18.597 buf 0x200000500000 len 3145728 PASSED 00:05:18.597 malloc 64 00:05:18.597 buf 0x2000004fff40 len 64 PASSED 00:05:18.597 malloc 4194304 00:05:18.597 register 0x200000800000 6291456 00:05:18.597 buf 0x200000a00000 len 4194304 PASSED 00:05:18.597 free 0x200000500000 3145728 00:05:18.597 free 0x2000004fff40 64 00:05:18.597 unregister 0x200000400000 4194304 PASSED 00:05:18.597 free 0x200000a00000 4194304 00:05:18.597 unregister 0x200000800000 6291456 PASSED 00:05:18.597 malloc 8388608 00:05:18.597 register 0x200000400000 10485760 00:05:18.597 buf 0x200000600000 len 8388608 PASSED 00:05:18.597 free 0x200000600000 8388608 00:05:18.597 unregister 0x200000400000 10485760 PASSED 00:05:18.597 passed 00:05:18.597 00:05:18.597 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.597 suites 1 1 n/a 0 0 00:05:18.597 tests 1 1 1 0 0 00:05:18.597 asserts 15 15 15 0 n/a 00:05:18.597 00:05:18.597 Elapsed time = 0.005 seconds 00:05:18.597 00:05:18.597 real 0m0.066s 00:05:18.597 user 0m0.024s 00:05:18.597 sys 0m0.042s 00:05:18.597 17:14:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.597 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.597 ************************************ 00:05:18.597 END TEST env_mem_callbacks 00:05:18.597 ************************************ 00:05:18.597 00:05:18.597 real 0m6.464s 00:05:18.597 user 0m4.533s 00:05:18.597 sys 0m1.198s 00:05:18.597 17:14:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.597 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.597 ************************************ 00:05:18.597 END TEST env 00:05:18.597 ************************************ 00:05:18.597 17:14:38 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.597 17:14:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.597 17:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.597 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.597 ************************************ 00:05:18.597 START TEST rpc 00:05:18.597 ************************************ 00:05:18.597 17:14:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.856 * Looking for test storage... 
00:05:18.856 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:18.856 17:14:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:18.856 17:14:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:18.856 17:14:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:18.856 17:14:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:18.856 17:14:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:18.856 17:14:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:18.856 17:14:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:18.856 17:14:38 -- scripts/common.sh@335 -- # IFS=.-: 00:05:18.856 17:14:38 -- scripts/common.sh@335 -- # read -ra ver1 00:05:18.856 17:14:38 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.856 17:14:38 -- scripts/common.sh@336 -- # read -ra ver2 00:05:18.856 17:14:38 -- scripts/common.sh@337 -- # local 'op=<' 00:05:18.856 17:14:38 -- scripts/common.sh@339 -- # ver1_l=2 00:05:18.856 17:14:38 -- scripts/common.sh@340 -- # ver2_l=1 00:05:18.856 17:14:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:18.856 17:14:38 -- scripts/common.sh@343 -- # case "$op" in 00:05:18.856 17:14:38 -- scripts/common.sh@344 -- # : 1 00:05:18.856 17:14:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:18.856 17:14:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.856 17:14:38 -- scripts/common.sh@364 -- # decimal 1 00:05:18.856 17:14:38 -- scripts/common.sh@352 -- # local d=1 00:05:18.856 17:14:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.856 17:14:38 -- scripts/common.sh@354 -- # echo 1 00:05:18.856 17:14:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:18.856 17:14:38 -- scripts/common.sh@365 -- # decimal 2 00:05:18.856 17:14:38 -- scripts/common.sh@352 -- # local d=2 00:05:18.856 17:14:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.856 17:14:38 -- scripts/common.sh@354 -- # echo 2 00:05:18.856 17:14:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:18.856 17:14:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:18.856 17:14:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:18.856 17:14:38 -- scripts/common.sh@367 -- # return 0 00:05:18.856 17:14:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.856 17:14:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.856 --rc genhtml_branch_coverage=1 00:05:18.856 --rc genhtml_function_coverage=1 00:05:18.856 --rc genhtml_legend=1 00:05:18.856 --rc geninfo_all_blocks=1 00:05:18.856 --rc geninfo_unexecuted_blocks=1 00:05:18.856 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.856 ' 00:05:18.856 17:14:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.856 --rc genhtml_branch_coverage=1 00:05:18.856 --rc genhtml_function_coverage=1 00:05:18.856 --rc genhtml_legend=1 00:05:18.856 --rc geninfo_all_blocks=1 00:05:18.856 --rc geninfo_unexecuted_blocks=1 00:05:18.856 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.856 ' 00:05:18.856 17:14:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.856 --rc genhtml_branch_coverage=1 00:05:18.856 
--rc genhtml_function_coverage=1 00:05:18.856 --rc genhtml_legend=1 00:05:18.856 --rc geninfo_all_blocks=1 00:05:18.856 --rc geninfo_unexecuted_blocks=1 00:05:18.856 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.856 ' 00:05:18.856 17:14:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.856 --rc genhtml_branch_coverage=1 00:05:18.856 --rc genhtml_function_coverage=1 00:05:18.856 --rc genhtml_legend=1 00:05:18.856 --rc geninfo_all_blocks=1 00:05:18.856 --rc geninfo_unexecuted_blocks=1 00:05:18.856 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.856 ' 00:05:18.856 17:14:38 -- rpc/rpc.sh@65 -- # spdk_pid=341989 00:05:18.856 17:14:38 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.856 17:14:38 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:18.856 17:14:38 -- rpc/rpc.sh@67 -- # waitforlisten 341989 00:05:18.856 17:14:38 -- common/autotest_common.sh@829 -- # '[' -z 341989 ']' 00:05:18.856 17:14:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.856 17:14:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.857 17:14:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.857 17:14:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.857 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.857 [2024-11-09 17:14:38.552460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:18.857 [2024-11-09 17:14:38.552552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341989 ] 00:05:18.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.857 [2024-11-09 17:14:38.619539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.116 [2024-11-09 17:14:38.694972] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.116 [2024-11-09 17:14:38.695074] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:19.116 [2024-11-09 17:14:38.695085] app.c: 492:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 341989' to capture a snapshot of events at runtime. 00:05:19.116 [2024-11-09 17:14:38.695094] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid341989 for offline analysis/debug. 
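The app_setup_trace notice above names both ways to capture the tracepoint data; a short sketch of each (the PID 341989 is specific to this run, so substitute the live spdk_tgt PID):

```bash
#!/usr/bin/env bash
# Both capture paths quoted in the notice above. spdk_trace is built under
# spdk/build/bin in this workspace; -s names the shm region, -p the pid.
spdk_trace -s spdk_tgt -p 341989 > trace_snapshot.txt

# Or keep the raw shared-memory trace file for offline analysis/debug:
cp /dev/shm/spdk_tgt_trace.pid341989 ./spdk_tgt_trace.bin
```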
00:05:19.116 [2024-11-09 17:14:38.695113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.684 17:14:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.684 17:14:39 -- common/autotest_common.sh@862 -- # return 0 00:05:19.684 17:14:39 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:19.684 17:14:39 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:19.684 17:14:39 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:19.684 17:14:39 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:19.684 17:14:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.684 17:14:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.684 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.684 ************************************ 00:05:19.684 START TEST rpc_integrity 00:05:19.684 ************************************ 00:05:19.684 17:14:39 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:19.684 17:14:39 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.684 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.684 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.684 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.684 17:14:39 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.684 17:14:39 -- rpc/rpc.sh@13 -- # jq length 00:05:19.684 17:14:39 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.684 17:14:39 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.684 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.684 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.684 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.684 17:14:39 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:19.684 17:14:39 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.684 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.684 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.943 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.944 17:14:39 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.944 { 00:05:19.944 "name": "Malloc0", 00:05:19.944 "aliases": [ 00:05:19.944 "112957c4-b1ef-42a6-9935-ed408f3bd338" 00:05:19.944 ], 00:05:19.944 "product_name": "Malloc disk", 00:05:19.944 "block_size": 512, 00:05:19.944 "num_blocks": 16384, 00:05:19.944 "uuid": "112957c4-b1ef-42a6-9935-ed408f3bd338", 00:05:19.944 "assigned_rate_limits": { 00:05:19.944 "rw_ios_per_sec": 0, 00:05:19.944 "rw_mbytes_per_sec": 0, 00:05:19.944 "r_mbytes_per_sec": 0, 00:05:19.944 "w_mbytes_per_sec": 0 00:05:19.944 }, 00:05:19.944 "claimed": false, 00:05:19.944 "zoned": false, 00:05:19.944 "supported_io_types": { 00:05:19.944 "read": true, 00:05:19.944 "write": true, 00:05:19.944 "unmap": true, 00:05:19.944 "write_zeroes": true, 00:05:19.944 "flush": true, 00:05:19.944 "reset": true, 00:05:19.944 "compare": false, 00:05:19.944 "compare_and_write": false, 
00:05:19.944 "abort": true, 00:05:19.944 "nvme_admin": false, 00:05:19.944 "nvme_io": false 00:05:19.944 }, 00:05:19.944 "memory_domains": [ 00:05:19.944 { 00:05:19.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.944 "dma_device_type": 2 00:05:19.944 } 00:05:19.944 ], 00:05:19.944 "driver_specific": {} 00:05:19.944 } 00:05:19.944 ]' 00:05:19.944 17:14:39 -- rpc/rpc.sh@17 -- # jq length 00:05:19.944 17:14:39 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.944 17:14:39 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:19.944 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.944 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.944 [2024-11-09 17:14:39.518286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:19.944 [2024-11-09 17:14:39.518322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.944 [2024-11-09 17:14:39.518342] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4ccb030 00:05:19.944 [2024-11-09 17:14:39.518353] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.944 [2024-11-09 17:14:39.519180] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.944 [2024-11-09 17:14:39.519205] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.944 Passthru0 00:05:19.944 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.944 17:14:39 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.944 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.944 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.944 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.944 17:14:39 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.944 { 00:05:19.944 "name": "Malloc0", 00:05:19.944 "aliases": [ 00:05:19.944 "112957c4-b1ef-42a6-9935-ed408f3bd338" 00:05:19.944 ], 00:05:19.944 "product_name": "Malloc disk", 00:05:19.944 "block_size": 512, 00:05:19.944 "num_blocks": 16384, 00:05:19.944 "uuid": "112957c4-b1ef-42a6-9935-ed408f3bd338", 00:05:19.944 "assigned_rate_limits": { 00:05:19.944 "rw_ios_per_sec": 0, 00:05:19.944 "rw_mbytes_per_sec": 0, 00:05:19.944 "r_mbytes_per_sec": 0, 00:05:19.944 "w_mbytes_per_sec": 0 00:05:19.944 }, 00:05:19.944 "claimed": true, 00:05:19.944 "claim_type": "exclusive_write", 00:05:19.944 "zoned": false, 00:05:19.944 "supported_io_types": { 00:05:19.944 "read": true, 00:05:19.944 "write": true, 00:05:19.944 "unmap": true, 00:05:19.944 "write_zeroes": true, 00:05:19.944 "flush": true, 00:05:19.944 "reset": true, 00:05:19.944 "compare": false, 00:05:19.944 "compare_and_write": false, 00:05:19.944 "abort": true, 00:05:19.944 "nvme_admin": false, 00:05:19.944 "nvme_io": false 00:05:19.944 }, 00:05:19.944 "memory_domains": [ 00:05:19.944 { 00:05:19.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.944 "dma_device_type": 2 00:05:19.944 } 00:05:19.944 ], 00:05:19.944 "driver_specific": {} 00:05:19.944 }, 00:05:19.944 { 00:05:19.944 "name": "Passthru0", 00:05:19.944 "aliases": [ 00:05:19.944 "ad8a6aa5-4da8-5acb-bb58-2fd0fb487cf5" 00:05:19.944 ], 00:05:19.944 "product_name": "passthru", 00:05:19.944 "block_size": 512, 00:05:19.944 "num_blocks": 16384, 00:05:19.944 "uuid": "ad8a6aa5-4da8-5acb-bb58-2fd0fb487cf5", 00:05:19.944 "assigned_rate_limits": { 00:05:19.944 "rw_ios_per_sec": 0, 00:05:19.944 "rw_mbytes_per_sec": 0, 00:05:19.944 "r_mbytes_per_sec": 0, 00:05:19.944 
"w_mbytes_per_sec": 0 00:05:19.944 }, 00:05:19.944 "claimed": false, 00:05:19.944 "zoned": false, 00:05:19.944 "supported_io_types": { 00:05:19.944 "read": true, 00:05:19.944 "write": true, 00:05:19.944 "unmap": true, 00:05:19.944 "write_zeroes": true, 00:05:19.944 "flush": true, 00:05:19.944 "reset": true, 00:05:19.944 "compare": false, 00:05:19.944 "compare_and_write": false, 00:05:19.944 "abort": true, 00:05:19.944 "nvme_admin": false, 00:05:19.944 "nvme_io": false 00:05:19.944 }, 00:05:19.944 "memory_domains": [ 00:05:19.944 { 00:05:19.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.944 "dma_device_type": 2 00:05:19.944 } 00:05:19.944 ], 00:05:19.944 "driver_specific": { 00:05:19.944 "passthru": { 00:05:19.944 "name": "Passthru0", 00:05:19.944 "base_bdev_name": "Malloc0" 00:05:19.944 } 00:05:19.944 } 00:05:19.944 } 00:05:19.944 ]' 00:05:19.944 17:14:39 -- rpc/rpc.sh@21 -- # jq length 00:05:19.944 17:14:39 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.944 17:14:39 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.944 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.944 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.944 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.944 17:14:39 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:19.944 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.944 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.944 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.944 17:14:39 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.944 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.944 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.944 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.944 17:14:39 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.944 17:14:39 -- rpc/rpc.sh@26 -- # jq length 00:05:19.944 17:14:39 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.944 00:05:19.944 real 0m0.280s 00:05:19.944 user 0m0.162s 00:05:19.944 sys 0m0.053s 00:05:19.944 17:14:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.944 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.944 ************************************ 00:05:19.944 END TEST rpc_integrity 00:05:19.944 ************************************ 00:05:20.202 17:14:39 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:20.202 17:14:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.202 17:14:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.202 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:20.202 ************************************ 00:05:20.202 START TEST rpc_plugins 00:05:20.202 ************************************ 00:05:20.202 17:14:39 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:20.202 17:14:39 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:20.202 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.202 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:20.202 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.202 17:14:39 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:20.202 17:14:39 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:20.202 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.202 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:20.202 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.202 17:14:39 -- 
rpc/rpc.sh@31 -- # bdevs='[ 00:05:20.202 { 00:05:20.202 "name": "Malloc1", 00:05:20.202 "aliases": [ 00:05:20.202 "3b08d696-a361-416a-b610-0bf94442128d" 00:05:20.202 ], 00:05:20.202 "product_name": "Malloc disk", 00:05:20.202 "block_size": 4096, 00:05:20.202 "num_blocks": 256, 00:05:20.202 "uuid": "3b08d696-a361-416a-b610-0bf94442128d", 00:05:20.202 "assigned_rate_limits": { 00:05:20.202 "rw_ios_per_sec": 0, 00:05:20.202 "rw_mbytes_per_sec": 0, 00:05:20.202 "r_mbytes_per_sec": 0, 00:05:20.202 "w_mbytes_per_sec": 0 00:05:20.202 }, 00:05:20.202 "claimed": false, 00:05:20.202 "zoned": false, 00:05:20.202 "supported_io_types": { 00:05:20.202 "read": true, 00:05:20.202 "write": true, 00:05:20.202 "unmap": true, 00:05:20.202 "write_zeroes": true, 00:05:20.202 "flush": true, 00:05:20.202 "reset": true, 00:05:20.202 "compare": false, 00:05:20.202 "compare_and_write": false, 00:05:20.202 "abort": true, 00:05:20.202 "nvme_admin": false, 00:05:20.202 "nvme_io": false 00:05:20.202 }, 00:05:20.202 "memory_domains": [ 00:05:20.202 { 00:05:20.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.202 "dma_device_type": 2 00:05:20.202 } 00:05:20.202 ], 00:05:20.202 "driver_specific": {} 00:05:20.202 } 00:05:20.202 ]' 00:05:20.202 17:14:39 -- rpc/rpc.sh@32 -- # jq length 00:05:20.202 17:14:39 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:20.202 17:14:39 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:20.202 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.202 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:20.202 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.202 17:14:39 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:20.202 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.202 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:20.202 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.202 17:14:39 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:20.203 17:14:39 -- rpc/rpc.sh@36 -- # jq length 00:05:20.203 17:14:39 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:20.203 00:05:20.203 real 0m0.140s 00:05:20.203 user 0m0.084s 00:05:20.203 sys 0m0.021s 00:05:20.203 17:14:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.203 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:20.203 ************************************ 00:05:20.203 END TEST rpc_plugins 00:05:20.203 ************************************ 00:05:20.203 17:14:39 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:20.203 17:14:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.203 17:14:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.203 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:20.203 ************************************ 00:05:20.203 START TEST rpc_trace_cmd_test 00:05:20.203 ************************************ 00:05:20.203 17:14:39 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:20.203 17:14:39 -- rpc/rpc.sh@40 -- # local info 00:05:20.203 17:14:39 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:20.203 17:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.203 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:20.203 17:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.203 17:14:39 -- rpc/rpc.sh@42 -- # info='{ 00:05:20.203 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid341989", 00:05:20.203 "tpoint_group_mask": "0x8", 00:05:20.203 "iscsi_conn": { 00:05:20.203 "mask": "0x2", 
00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "scsi": { 00:05:20.203 "mask": "0x4", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "bdev": { 00:05:20.203 "mask": "0x8", 00:05:20.203 "tpoint_mask": "0xffffffffffffffff" 00:05:20.203 }, 00:05:20.203 "nvmf_rdma": { 00:05:20.203 "mask": "0x10", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "nvmf_tcp": { 00:05:20.203 "mask": "0x20", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "ftl": { 00:05:20.203 "mask": "0x40", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "blobfs": { 00:05:20.203 "mask": "0x80", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "dsa": { 00:05:20.203 "mask": "0x200", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "thread": { 00:05:20.203 "mask": "0x400", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "nvme_pcie": { 00:05:20.203 "mask": "0x800", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "iaa": { 00:05:20.203 "mask": "0x1000", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "nvme_tcp": { 00:05:20.203 "mask": "0x2000", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 }, 00:05:20.203 "bdev_nvme": { 00:05:20.203 "mask": "0x4000", 00:05:20.203 "tpoint_mask": "0x0" 00:05:20.203 } 00:05:20.203 }' 00:05:20.203 17:14:39 -- rpc/rpc.sh@43 -- # jq length 00:05:20.461 17:14:39 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:20.461 17:14:39 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:20.461 17:14:40 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:20.461 17:14:40 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:20.461 17:14:40 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:20.461 17:14:40 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:20.461 17:14:40 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:20.461 17:14:40 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:20.461 17:14:40 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:20.461 00:05:20.461 real 0m0.233s 00:05:20.461 user 0m0.189s 00:05:20.461 sys 0m0.036s 00:05:20.461 17:14:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.461 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.461 ************************************ 00:05:20.461 END TEST rpc_trace_cmd_test 00:05:20.461 ************************************ 00:05:20.461 17:14:40 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:20.461 17:14:40 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:20.461 17:14:40 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:20.461 17:14:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.461 17:14:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.461 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.461 ************************************ 00:05:20.461 START TEST rpc_daemon_integrity 00:05:20.461 ************************************ 00:05:20.461 17:14:40 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:20.461 17:14:40 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.461 17:14:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.461 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.461 17:14:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.461 17:14:40 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.461 17:14:40 -- rpc/rpc.sh@13 -- # jq length 00:05:20.720 17:14:40 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.720 17:14:40 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.720 
17:14:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.720 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.720 17:14:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.720 17:14:40 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:20.720 17:14:40 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.720 17:14:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.720 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.720 17:14:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.720 17:14:40 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.720 { 00:05:20.720 "name": "Malloc2", 00:05:20.720 "aliases": [ 00:05:20.720 "8baed564-5233-4062-81a0-447c85c97835" 00:05:20.720 ], 00:05:20.720 "product_name": "Malloc disk", 00:05:20.720 "block_size": 512, 00:05:20.720 "num_blocks": 16384, 00:05:20.720 "uuid": "8baed564-5233-4062-81a0-447c85c97835", 00:05:20.720 "assigned_rate_limits": { 00:05:20.720 "rw_ios_per_sec": 0, 00:05:20.720 "rw_mbytes_per_sec": 0, 00:05:20.720 "r_mbytes_per_sec": 0, 00:05:20.720 "w_mbytes_per_sec": 0 00:05:20.720 }, 00:05:20.720 "claimed": false, 00:05:20.720 "zoned": false, 00:05:20.720 "supported_io_types": { 00:05:20.720 "read": true, 00:05:20.720 "write": true, 00:05:20.720 "unmap": true, 00:05:20.720 "write_zeroes": true, 00:05:20.720 "flush": true, 00:05:20.720 "reset": true, 00:05:20.720 "compare": false, 00:05:20.720 "compare_and_write": false, 00:05:20.720 "abort": true, 00:05:20.720 "nvme_admin": false, 00:05:20.720 "nvme_io": false 00:05:20.720 }, 00:05:20.720 "memory_domains": [ 00:05:20.720 { 00:05:20.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.720 "dma_device_type": 2 00:05:20.720 } 00:05:20.720 ], 00:05:20.720 "driver_specific": {} 00:05:20.720 } 00:05:20.720 ]' 00:05:20.720 17:14:40 -- rpc/rpc.sh@17 -- # jq length 00:05:20.720 17:14:40 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.720 17:14:40 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:20.720 17:14:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.720 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.720 [2024-11-09 17:14:40.324475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:20.720 [2024-11-09 17:14:40.324508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.720 [2024-11-09 17:14:40.324528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4e54980 00:05:20.720 [2024-11-09 17:14:40.324539] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.720 [2024-11-09 17:14:40.325242] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.720 [2024-11-09 17:14:40.325264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.720 Passthru0 00:05:20.720 17:14:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.720 17:14:40 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.720 17:14:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.720 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.720 17:14:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.720 17:14:40 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.720 { 00:05:20.720 "name": "Malloc2", 00:05:20.720 "aliases": [ 00:05:20.720 "8baed564-5233-4062-81a0-447c85c97835" 00:05:20.720 ], 00:05:20.720 "product_name": "Malloc disk", 00:05:20.720 "block_size": 512, 00:05:20.720 "num_blocks": 16384, 
00:05:20.720 "uuid": "8baed564-5233-4062-81a0-447c85c97835", 00:05:20.720 "assigned_rate_limits": { 00:05:20.720 "rw_ios_per_sec": 0, 00:05:20.720 "rw_mbytes_per_sec": 0, 00:05:20.720 "r_mbytes_per_sec": 0, 00:05:20.720 "w_mbytes_per_sec": 0 00:05:20.720 }, 00:05:20.720 "claimed": true, 00:05:20.720 "claim_type": "exclusive_write", 00:05:20.720 "zoned": false, 00:05:20.720 "supported_io_types": { 00:05:20.720 "read": true, 00:05:20.720 "write": true, 00:05:20.720 "unmap": true, 00:05:20.720 "write_zeroes": true, 00:05:20.720 "flush": true, 00:05:20.720 "reset": true, 00:05:20.720 "compare": false, 00:05:20.720 "compare_and_write": false, 00:05:20.720 "abort": true, 00:05:20.720 "nvme_admin": false, 00:05:20.720 "nvme_io": false 00:05:20.720 }, 00:05:20.720 "memory_domains": [ 00:05:20.720 { 00:05:20.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.720 "dma_device_type": 2 00:05:20.720 } 00:05:20.720 ], 00:05:20.720 "driver_specific": {} 00:05:20.720 }, 00:05:20.720 { 00:05:20.720 "name": "Passthru0", 00:05:20.720 "aliases": [ 00:05:20.720 "3005f76c-dfd9-5508-8825-140131e98275" 00:05:20.720 ], 00:05:20.720 "product_name": "passthru", 00:05:20.720 "block_size": 512, 00:05:20.720 "num_blocks": 16384, 00:05:20.720 "uuid": "3005f76c-dfd9-5508-8825-140131e98275", 00:05:20.720 "assigned_rate_limits": { 00:05:20.720 "rw_ios_per_sec": 0, 00:05:20.720 "rw_mbytes_per_sec": 0, 00:05:20.720 "r_mbytes_per_sec": 0, 00:05:20.720 "w_mbytes_per_sec": 0 00:05:20.720 }, 00:05:20.720 "claimed": false, 00:05:20.720 "zoned": false, 00:05:20.720 "supported_io_types": { 00:05:20.720 "read": true, 00:05:20.720 "write": true, 00:05:20.720 "unmap": true, 00:05:20.720 "write_zeroes": true, 00:05:20.720 "flush": true, 00:05:20.720 "reset": true, 00:05:20.720 "compare": false, 00:05:20.720 "compare_and_write": false, 00:05:20.720 "abort": true, 00:05:20.720 "nvme_admin": false, 00:05:20.720 "nvme_io": false 00:05:20.720 }, 00:05:20.720 "memory_domains": [ 00:05:20.720 { 00:05:20.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.720 "dma_device_type": 2 00:05:20.720 } 00:05:20.720 ], 00:05:20.720 "driver_specific": { 00:05:20.720 "passthru": { 00:05:20.720 "name": "Passthru0", 00:05:20.720 "base_bdev_name": "Malloc2" 00:05:20.720 } 00:05:20.720 } 00:05:20.720 } 00:05:20.720 ]' 00:05:20.721 17:14:40 -- rpc/rpc.sh@21 -- # jq length 00:05:20.721 17:14:40 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.721 17:14:40 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.721 17:14:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.721 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.721 17:14:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.721 17:14:40 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:20.721 17:14:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.721 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.721 17:14:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.721 17:14:40 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.721 17:14:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.721 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.721 17:14:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.721 17:14:40 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.721 17:14:40 -- rpc/rpc.sh@26 -- # jq length 00:05:20.721 17:14:40 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.721 00:05:20.721 real 0m0.282s 00:05:20.721 user 0m0.167s 00:05:20.721 sys 0m0.053s 00:05:20.721 
17:14:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.721 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.721 ************************************ 00:05:20.721 END TEST rpc_daemon_integrity 00:05:20.721 ************************************ 00:05:20.980 17:14:40 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:20.980 17:14:40 -- rpc/rpc.sh@84 -- # killprocess 341989 00:05:20.980 17:14:40 -- common/autotest_common.sh@936 -- # '[' -z 341989 ']' 00:05:20.980 17:14:40 -- common/autotest_common.sh@940 -- # kill -0 341989 00:05:20.980 17:14:40 -- common/autotest_common.sh@941 -- # uname 00:05:20.980 17:14:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.980 17:14:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 341989 00:05:20.980 17:14:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.980 17:14:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.980 17:14:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 341989' 00:05:20.980 killing process with pid 341989 00:05:20.980 17:14:40 -- common/autotest_common.sh@955 -- # kill 341989 00:05:20.980 17:14:40 -- common/autotest_common.sh@960 -- # wait 341989 00:05:21.239 00:05:21.239 real 0m2.544s 00:05:21.239 user 0m3.189s 00:05:21.239 sys 0m0.765s 00:05:21.239 17:14:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.239 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:21.239 ************************************ 00:05:21.239 END TEST rpc 00:05:21.239 ************************************ 00:05:21.239 17:14:40 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:21.239 17:14:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.239 17:14:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.239 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:21.239 ************************************ 00:05:21.239 START TEST rpc_client 00:05:21.239 ************************************ 00:05:21.239 17:14:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:21.499 * Looking for test storage... 
00:05:21.499 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:21.499 17:14:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.499 17:14:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.499 17:14:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.499 17:14:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.499 17:14:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.499 17:14:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.499 17:14:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.499 17:14:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.499 17:14:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.499 17:14:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.499 17:14:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.499 17:14:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.499 17:14:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.499 17:14:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.499 17:14:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.499 17:14:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.499 17:14:41 -- scripts/common.sh@344 -- # : 1 00:05:21.499 17:14:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.499 17:14:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.499 17:14:41 -- scripts/common.sh@364 -- # decimal 1 00:05:21.499 17:14:41 -- scripts/common.sh@352 -- # local d=1 00:05:21.499 17:14:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.499 17:14:41 -- scripts/common.sh@354 -- # echo 1 00:05:21.499 17:14:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.499 17:14:41 -- scripts/common.sh@365 -- # decimal 2 00:05:21.499 17:14:41 -- scripts/common.sh@352 -- # local d=2 00:05:21.499 17:14:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.499 17:14:41 -- scripts/common.sh@354 -- # echo 2 00:05:21.499 17:14:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.499 17:14:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.499 17:14:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.499 17:14:41 -- scripts/common.sh@367 -- # return 0 00:05:21.499 17:14:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.499 17:14:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.499 --rc genhtml_branch_coverage=1 00:05:21.499 --rc genhtml_function_coverage=1 00:05:21.499 --rc genhtml_legend=1 00:05:21.499 --rc geninfo_all_blocks=1 00:05:21.499 --rc geninfo_unexecuted_blocks=1 00:05:21.499 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.499 ' 00:05:21.500 17:14:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.500 --rc genhtml_branch_coverage=1 00:05:21.500 --rc genhtml_function_coverage=1 00:05:21.500 --rc genhtml_legend=1 00:05:21.500 --rc geninfo_all_blocks=1 00:05:21.500 --rc geninfo_unexecuted_blocks=1 00:05:21.500 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.500 ' 00:05:21.500 17:14:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.500 --rc genhtml_branch_coverage=1 
00:05:21.500 --rc genhtml_function_coverage=1 00:05:21.500 --rc genhtml_legend=1 00:05:21.500 --rc geninfo_all_blocks=1 00:05:21.500 --rc geninfo_unexecuted_blocks=1 00:05:21.500 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.500 ' 00:05:21.500 17:14:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.500 --rc genhtml_branch_coverage=1 00:05:21.500 --rc genhtml_function_coverage=1 00:05:21.500 --rc genhtml_legend=1 00:05:21.500 --rc geninfo_all_blocks=1 00:05:21.500 --rc geninfo_unexecuted_blocks=1 00:05:21.500 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.500 ' 00:05:21.500 17:14:41 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:21.500 OK 00:05:21.500 17:14:41 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:21.500 00:05:21.500 real 0m0.196s 00:05:21.500 user 0m0.104s 00:05:21.500 sys 0m0.104s 00:05:21.500 17:14:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.500 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:21.500 ************************************ 00:05:21.500 END TEST rpc_client 00:05:21.500 ************************************ 00:05:21.500 17:14:41 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:21.500 17:14:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.500 17:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.500 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:21.500 ************************************ 00:05:21.500 START TEST json_config 00:05:21.500 ************************************ 00:05:21.500 17:14:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:21.500 17:14:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.500 17:14:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.500 17:14:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.759 17:14:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.759 17:14:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.759 17:14:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.759 17:14:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.760 17:14:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.760 17:14:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.760 17:14:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.760 17:14:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.760 17:14:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.760 17:14:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.760 17:14:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.760 17:14:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.760 17:14:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.760 17:14:41 -- scripts/common.sh@344 -- # : 1 00:05:21.760 17:14:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.760 17:14:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.760 17:14:41 -- scripts/common.sh@364 -- # decimal 1 00:05:21.760 17:14:41 -- scripts/common.sh@352 -- # local d=1 00:05:21.760 17:14:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.760 17:14:41 -- scripts/common.sh@354 -- # echo 1 00:05:21.760 17:14:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.760 17:14:41 -- scripts/common.sh@365 -- # decimal 2 00:05:21.760 17:14:41 -- scripts/common.sh@352 -- # local d=2 00:05:21.760 17:14:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.760 17:14:41 -- scripts/common.sh@354 -- # echo 2 00:05:21.760 17:14:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.760 17:14:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.760 17:14:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.760 17:14:41 -- scripts/common.sh@367 -- # return 0 00:05:21.760 17:14:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.760 17:14:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.760 --rc genhtml_branch_coverage=1 00:05:21.760 --rc genhtml_function_coverage=1 00:05:21.760 --rc genhtml_legend=1 00:05:21.760 --rc geninfo_all_blocks=1 00:05:21.760 --rc geninfo_unexecuted_blocks=1 00:05:21.760 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.760 ' 00:05:21.760 17:14:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.760 --rc genhtml_branch_coverage=1 00:05:21.760 --rc genhtml_function_coverage=1 00:05:21.760 --rc genhtml_legend=1 00:05:21.760 --rc geninfo_all_blocks=1 00:05:21.760 --rc geninfo_unexecuted_blocks=1 00:05:21.760 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.760 ' 00:05:21.760 17:14:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.760 --rc genhtml_branch_coverage=1 00:05:21.760 --rc genhtml_function_coverage=1 00:05:21.760 --rc genhtml_legend=1 00:05:21.760 --rc geninfo_all_blocks=1 00:05:21.760 --rc geninfo_unexecuted_blocks=1 00:05:21.760 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.760 ' 00:05:21.760 17:14:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.760 --rc genhtml_branch_coverage=1 00:05:21.760 --rc genhtml_function_coverage=1 00:05:21.760 --rc genhtml_legend=1 00:05:21.760 --rc geninfo_all_blocks=1 00:05:21.760 --rc geninfo_unexecuted_blocks=1 00:05:21.760 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.760 ' 00:05:21.760 17:14:41 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.760 17:14:41 -- nvmf/common.sh@7 -- # uname -s 00:05:21.760 17:14:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.760 17:14:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.760 17:14:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.760 17:14:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.760 17:14:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.760 17:14:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.760 17:14:41 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.760 17:14:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.760 17:14:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.760 17:14:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.760 17:14:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:21.760 17:14:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:21.760 17:14:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.760 17:14:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.760 17:14:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.760 17:14:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:21.760 17:14:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.760 17:14:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.760 17:14:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.760 17:14:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.760 17:14:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.760 17:14:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.760 17:14:41 -- paths/export.sh@5 -- # export PATH 00:05:21.760 17:14:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.760 17:14:41 -- nvmf/common.sh@46 -- # : 0 00:05:21.760 17:14:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:21.760 17:14:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:21.760 17:14:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:21.760 17:14:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.760 17:14:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.760 17:14:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:21.760 17:14:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:21.760 
17:14:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:21.760 17:14:41 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:21.760 17:14:41 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:21.760 17:14:41 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:21.760 17:14:41 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:21.760 17:14:41 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:21.760 WARNING: No tests are enabled so not running JSON configuration tests 00:05:21.760 17:14:41 -- json_config/json_config.sh@27 -- # exit 0 00:05:21.760 00:05:21.760 real 0m0.178s 00:05:21.760 user 0m0.116s 00:05:21.760 sys 0m0.070s 00:05:21.760 17:14:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.760 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:21.760 ************************************ 00:05:21.760 END TEST json_config 00:05:21.760 ************************************ 00:05:21.760 17:14:41 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.760 17:14:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.760 17:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.760 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:21.760 ************************************ 00:05:21.760 START TEST json_config_extra_key 00:05:21.760 ************************************ 00:05:21.760 17:14:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.760 17:14:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.760 17:14:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.760 17:14:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.760 17:14:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.760 17:14:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.760 17:14:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.760 17:14:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.760 17:14:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.760 17:14:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.760 17:14:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.760 17:14:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.760 17:14:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.760 17:14:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.760 17:14:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.760 17:14:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.760 17:14:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.760 17:14:41 -- scripts/common.sh@344 -- # : 1 00:05:21.760 17:14:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.760 17:14:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.021 17:14:41 -- scripts/common.sh@364 -- # decimal 1 00:05:22.021 17:14:41 -- scripts/common.sh@352 -- # local d=1 00:05:22.021 17:14:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.021 17:14:41 -- scripts/common.sh@354 -- # echo 1 00:05:22.021 17:14:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.021 17:14:41 -- scripts/common.sh@365 -- # decimal 2 00:05:22.021 17:14:41 -- scripts/common.sh@352 -- # local d=2 00:05:22.021 17:14:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.021 17:14:41 -- scripts/common.sh@354 -- # echo 2 00:05:22.021 17:14:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.021 17:14:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.021 17:14:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.021 17:14:41 -- scripts/common.sh@367 -- # return 0 00:05:22.021 17:14:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.021 17:14:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.021 --rc genhtml_branch_coverage=1 00:05:22.021 --rc genhtml_function_coverage=1 00:05:22.021 --rc genhtml_legend=1 00:05:22.021 --rc geninfo_all_blocks=1 00:05:22.021 --rc geninfo_unexecuted_blocks=1 00:05:22.021 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:22.021 ' 00:05:22.021 17:14:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.021 --rc genhtml_branch_coverage=1 00:05:22.021 --rc genhtml_function_coverage=1 00:05:22.021 --rc genhtml_legend=1 00:05:22.021 --rc geninfo_all_blocks=1 00:05:22.021 --rc geninfo_unexecuted_blocks=1 00:05:22.021 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:22.021 ' 00:05:22.021 17:14:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.021 --rc genhtml_branch_coverage=1 00:05:22.021 --rc genhtml_function_coverage=1 00:05:22.021 --rc genhtml_legend=1 00:05:22.021 --rc geninfo_all_blocks=1 00:05:22.021 --rc geninfo_unexecuted_blocks=1 00:05:22.021 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:22.021 ' 00:05:22.021 17:14:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.021 --rc genhtml_branch_coverage=1 00:05:22.021 --rc genhtml_function_coverage=1 00:05:22.021 --rc genhtml_legend=1 00:05:22.021 --rc geninfo_all_blocks=1 00:05:22.021 --rc geninfo_unexecuted_blocks=1 00:05:22.021 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:22.021 ' 00:05:22.021 17:14:41 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.021 17:14:41 -- nvmf/common.sh@7 -- # uname -s 00:05:22.021 17:14:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.021 17:14:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.021 17:14:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.021 17:14:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.021 17:14:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.021 17:14:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.021 17:14:41 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.021 17:14:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.021 17:14:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.021 17:14:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.021 17:14:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:22.021 17:14:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:22.021 17:14:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.021 17:14:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.021 17:14:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:22.021 17:14:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:22.021 17:14:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.021 17:14:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.021 17:14:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.021 17:14:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.021 17:14:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.021 17:14:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.021 17:14:41 -- paths/export.sh@5 -- # export PATH 00:05:22.021 17:14:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.021 17:14:41 -- nvmf/common.sh@46 -- # : 0 00:05:22.021 17:14:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:22.021 17:14:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:22.021 17:14:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:22.021 17:14:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.021 17:14:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.021 17:14:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:22.021 17:14:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:22.021 
17:14:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:22.021 17:14:41 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:22.021 17:14:41 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:22.021 17:14:41 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:22.021 17:14:41 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:22.021 17:14:41 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:22.021 17:14:41 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:22.021 17:14:41 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:22.021 17:14:41 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:22.022 INFO: launching applications... 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=342792 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:22.022 Waiting for target to run... 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 342792 /var/tmp/spdk_tgt.sock 00:05:22.022 17:14:41 -- common/autotest_common.sh@829 -- # '[' -z 342792 ']' 00:05:22.022 17:14:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.022 17:14:41 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:22.022 17:14:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.022 17:14:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.022 17:14:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.022 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:22.022 [2024-11-09 17:14:41.603863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
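The extra-key run traced above boils down to a start/wait pattern: launch spdk_tgt with core mask 0x1, 1024 MB of memory, a private RPC socket, and the extra_key JSON config, then block until the socket answers. A minimal sketch of that pattern, run from the spdk checkout; the polling loop is an assumption standing in for waitforlisten (whose body is not shown in this trace), and spdk_get_version is chosen only because it appears in this target's rpc_get_methods list further down:

    rpc_sock=/var/tmp/spdk_tgt.sock

    # Start the target exactly as the trace does: core mask 0x1,
    # 1024 MB of memory, private RPC socket, JSON config.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r "$rpc_sock" \
        --json test/json_config/extra_key.json &
    app_pid=$!

    # Stand-in for waitforlisten: retry a cheap RPC until the UNIX
    # domain socket accepts connections; give up after ~15 s.
    for _ in $(seq 1 30); do
        scripts/rpc.py -s "$rpc_sock" spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done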
00:05:22.022 [2024-11-09 17:14:41.603947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342792 ] 00:05:22.022 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.281 [2024-11-09 17:14:42.040700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.539 [2024-11-09 17:14:42.131874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.539 [2024-11-09 17:14:42.131972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.799 17:14:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.799 17:14:42 -- common/autotest_common.sh@862 -- # return 0 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:22.799 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:22.799 INFO: shutting down applications... 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 342792 ]] 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 342792 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@50 -- # kill -0 342792 00:05:22.799 17:14:42 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:23.368 17:14:42 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:23.368 17:14:42 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:23.368 17:14:42 -- json_config/json_config_extra_key.sh@50 -- # kill -0 342792 00:05:23.368 17:14:42 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:23.368 17:14:42 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:23.368 17:14:42 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:23.368 17:14:42 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:23.368 SPDK target shutdown done 00:05:23.368 17:14:42 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:23.368 Success 00:05:23.368 00:05:23.368 real 0m1.554s 00:05:23.368 user 0m1.138s 00:05:23.368 sys 0m0.580s 00:05:23.368 17:14:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.368 17:14:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.368 ************************************ 00:05:23.368 END TEST json_config_extra_key 00:05:23.368 ************************************ 00:05:23.368 17:14:42 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.368 17:14:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.368 17:14:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.368 17:14:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.368 ************************************ 00:05:23.368 START TEST alias_rpc 00:05:23.368 ************************************ 00:05:23.368 17:14:42 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.368 * Looking for test storage... 00:05:23.368 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:23.368 17:14:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:23.368 17:14:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:23.368 17:14:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:23.628 17:14:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:23.628 17:14:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:23.628 17:14:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:23.628 17:14:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:23.628 17:14:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:23.628 17:14:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:23.628 17:14:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.628 17:14:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:23.628 17:14:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:23.628 17:14:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:23.628 17:14:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:23.628 17:14:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:23.628 17:14:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:23.628 17:14:43 -- scripts/common.sh@344 -- # : 1 00:05:23.628 17:14:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:23.628 17:14:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.628 17:14:43 -- scripts/common.sh@364 -- # decimal 1 00:05:23.628 17:14:43 -- scripts/common.sh@352 -- # local d=1 00:05:23.628 17:14:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.628 17:14:43 -- scripts/common.sh@354 -- # echo 1 00:05:23.628 17:14:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:23.628 17:14:43 -- scripts/common.sh@365 -- # decimal 2 00:05:23.628 17:14:43 -- scripts/common.sh@352 -- # local d=2 00:05:23.628 17:14:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.628 17:14:43 -- scripts/common.sh@354 -- # echo 2 00:05:23.628 17:14:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:23.628 17:14:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:23.628 17:14:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:23.628 17:14:43 -- scripts/common.sh@367 -- # return 0 00:05:23.628 17:14:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.628 17:14:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:23.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.628 --rc genhtml_branch_coverage=1 00:05:23.628 --rc genhtml_function_coverage=1 00:05:23.628 --rc genhtml_legend=1 00:05:23.628 --rc geninfo_all_blocks=1 00:05:23.628 --rc geninfo_unexecuted_blocks=1 00:05:23.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:23.628 ' 00:05:23.628 17:14:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:23.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.628 --rc genhtml_branch_coverage=1 00:05:23.628 --rc genhtml_function_coverage=1 00:05:23.628 --rc genhtml_legend=1 00:05:23.628 --rc geninfo_all_blocks=1 00:05:23.628 --rc geninfo_unexecuted_blocks=1 00:05:23.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:23.628 ' 00:05:23.628 
17:14:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:23.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.628 --rc genhtml_branch_coverage=1 00:05:23.628 --rc genhtml_function_coverage=1 00:05:23.628 --rc genhtml_legend=1 00:05:23.628 --rc geninfo_all_blocks=1 00:05:23.628 --rc geninfo_unexecuted_blocks=1 00:05:23.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:23.628 ' 00:05:23.628 17:14:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:23.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.628 --rc genhtml_branch_coverage=1 00:05:23.628 --rc genhtml_function_coverage=1 00:05:23.628 --rc genhtml_legend=1 00:05:23.628 --rc geninfo_all_blocks=1 00:05:23.628 --rc geninfo_unexecuted_blocks=1 00:05:23.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:23.628 ' 00:05:23.628 17:14:43 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.628 17:14:43 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=343112 00:05:23.628 17:14:43 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 343112 00:05:23.628 17:14:43 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.628 17:14:43 -- common/autotest_common.sh@829 -- # '[' -z 343112 ']' 00:05:23.628 17:14:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.628 17:14:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.628 17:14:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.628 17:14:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.628 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:23.628 [2024-11-09 17:14:43.201719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
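Past the startup banner, alias_rpc.sh is essentially one RPC round-trip: pipe a JSON configuration into load_config (with -i, passed through exactly as the trace shows) against the default /var/tmp/spdk.sock, then kill the target. A sketch; the empty subsystem list is a placeholder assumption, not the config the test actually loads:

    # load_config reads JSON from stdin when no file is given.
    echo '{"subsystems": []}' | scripts/rpc.py load_config -i

    # Tear down, mirroring killprocess $spdk_tgt_pid in the trace.
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"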
00:05:23.628 [2024-11-09 17:14:43.201787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343112 ] 00:05:23.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.628 [2024-11-09 17:14:43.268388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.628 [2024-11-09 17:14:43.338052] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.628 [2024-11-09 17:14:43.338167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.566 17:14:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.566 17:14:44 -- common/autotest_common.sh@862 -- # return 0 00:05:24.566 17:14:44 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:24.566 17:14:44 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 343112 00:05:24.566 17:14:44 -- common/autotest_common.sh@936 -- # '[' -z 343112 ']' 00:05:24.566 17:14:44 -- common/autotest_common.sh@940 -- # kill -0 343112 00:05:24.566 17:14:44 -- common/autotest_common.sh@941 -- # uname 00:05:24.566 17:14:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.566 17:14:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 343112 00:05:24.566 17:14:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.566 17:14:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.566 17:14:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 343112' 00:05:24.566 killing process with pid 343112 00:05:24.566 17:14:44 -- common/autotest_common.sh@955 -- # kill 343112 00:05:24.566 17:14:44 -- common/autotest_common.sh@960 -- # wait 343112 00:05:25.135 00:05:25.135 real 0m1.615s 00:05:25.135 user 0m1.698s 00:05:25.135 sys 0m0.503s 00:05:25.135 17:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.135 17:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.135 ************************************ 00:05:25.135 END TEST alias_rpc 00:05:25.135 ************************************ 00:05:25.135 17:14:44 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:25.135 17:14:44 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.135 17:14:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.135 17:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.135 17:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.135 ************************************ 00:05:25.135 START TEST spdkcli_tcp 00:05:25.135 ************************************ 00:05:25.135 17:14:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.135 * Looking for test storage... 
00:05:25.135 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:25.135 17:14:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.135 17:14:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.135 17:14:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.135 17:14:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.135 17:14:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.135 17:14:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.135 17:14:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.135 17:14:44 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.135 17:14:44 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.135 17:14:44 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.135 17:14:44 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.135 17:14:44 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.135 17:14:44 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.135 17:14:44 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.135 17:14:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.135 17:14:44 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.135 17:14:44 -- scripts/common.sh@344 -- # : 1 00:05:25.135 17:14:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.135 17:14:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.135 17:14:44 -- scripts/common.sh@364 -- # decimal 1 00:05:25.135 17:14:44 -- scripts/common.sh@352 -- # local d=1 00:05:25.135 17:14:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.135 17:14:44 -- scripts/common.sh@354 -- # echo 1 00:05:25.135 17:14:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.135 17:14:44 -- scripts/common.sh@365 -- # decimal 2 00:05:25.135 17:14:44 -- scripts/common.sh@352 -- # local d=2 00:05:25.135 17:14:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.135 17:14:44 -- scripts/common.sh@354 -- # echo 2 00:05:25.135 17:14:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.135 17:14:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.135 17:14:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.135 17:14:44 -- scripts/common.sh@367 -- # return 0 00:05:25.135 17:14:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.135 17:14:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.135 --rc genhtml_branch_coverage=1 00:05:25.135 --rc genhtml_function_coverage=1 00:05:25.135 --rc genhtml_legend=1 00:05:25.135 --rc geninfo_all_blocks=1 00:05:25.135 --rc geninfo_unexecuted_blocks=1 00:05:25.135 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:25.135 ' 00:05:25.135 17:14:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.135 --rc genhtml_branch_coverage=1 00:05:25.135 --rc genhtml_function_coverage=1 00:05:25.135 --rc genhtml_legend=1 00:05:25.135 --rc geninfo_all_blocks=1 00:05:25.135 --rc geninfo_unexecuted_blocks=1 00:05:25.135 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:25.135 ' 00:05:25.135 17:14:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.135 --rc genhtml_branch_coverage=1 
00:05:25.136 --rc genhtml_function_coverage=1 00:05:25.136 --rc genhtml_legend=1 00:05:25.136 --rc geninfo_all_blocks=1 00:05:25.136 --rc geninfo_unexecuted_blocks=1 00:05:25.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:25.136 ' 00:05:25.136 17:14:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.136 --rc genhtml_branch_coverage=1 00:05:25.136 --rc genhtml_function_coverage=1 00:05:25.136 --rc genhtml_legend=1 00:05:25.136 --rc geninfo_all_blocks=1 00:05:25.136 --rc geninfo_unexecuted_blocks=1 00:05:25.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:25.136 ' 00:05:25.136 17:14:44 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:25.136 17:14:44 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:25.136 17:14:44 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:25.136 17:14:44 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:25.136 17:14:44 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:25.136 17:14:44 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:25.136 17:14:44 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:25.136 17:14:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.136 17:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.136 17:14:44 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=343449 00:05:25.136 17:14:44 -- spdkcli/tcp.sh@27 -- # waitforlisten 343449 00:05:25.136 17:14:44 -- common/autotest_common.sh@829 -- # '[' -z 343449 ']' 00:05:25.136 17:14:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.136 17:14:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.136 17:14:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.136 17:14:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.136 17:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.136 17:14:44 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:25.136 [2024-11-09 17:14:44.852835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
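The tcp.sh run that starts here rests on one trick, visible a few entries below: socat bridges TCP port 9998 on 127.0.0.1 to the target's UNIX-domain RPC socket, so rpc.py can be exercised over IP. Condensed from the traced commands:

    # Accept one TCP connection on 127.0.0.1:9998 and relay it to the
    # target's UNIX-domain RPC socket (single-shot, as in the test).
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Issue an RPC over TCP: up to 100 connect retries, 2 s timeout each.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null || true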
00:05:25.136 [2024-11-09 17:14:44.852924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343449 ] 00:05:25.136 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.395 [2024-11-09 17:14:44.920540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.395 [2024-11-09 17:14:44.995756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.395 [2024-11-09 17:14:44.995888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.395 [2024-11-09 17:14:44.995891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.964 17:14:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.964 17:14:45 -- common/autotest_common.sh@862 -- # return 0 00:05:25.964 17:14:45 -- spdkcli/tcp.sh@31 -- # socat_pid=343715 00:05:25.964 17:14:45 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:25.964 17:14:45 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.224 [ 00:05:26.224 "spdk_get_version", 00:05:26.224 "rpc_get_methods", 00:05:26.224 "trace_get_info", 00:05:26.224 "trace_get_tpoint_group_mask", 00:05:26.224 "trace_disable_tpoint_group", 00:05:26.224 "trace_enable_tpoint_group", 00:05:26.224 "trace_clear_tpoint_mask", 00:05:26.224 "trace_set_tpoint_mask", 00:05:26.224 "vfu_tgt_set_base_path", 00:05:26.224 "framework_get_pci_devices", 00:05:26.224 "framework_get_config", 00:05:26.224 "framework_get_subsystems", 00:05:26.224 "iobuf_get_stats", 00:05:26.224 "iobuf_set_options", 00:05:26.224 "sock_set_default_impl", 00:05:26.224 "sock_impl_set_options", 00:05:26.224 "sock_impl_get_options", 00:05:26.224 "vmd_rescan", 00:05:26.224 "vmd_remove_device", 00:05:26.224 "vmd_enable", 00:05:26.224 "accel_get_stats", 00:05:26.224 "accel_set_options", 00:05:26.224 "accel_set_driver", 00:05:26.224 "accel_crypto_key_destroy", 00:05:26.224 "accel_crypto_keys_get", 00:05:26.224 "accel_crypto_key_create", 00:05:26.224 "accel_assign_opc", 00:05:26.224 "accel_get_module_info", 00:05:26.224 "accel_get_opc_assignments", 00:05:26.224 "notify_get_notifications", 00:05:26.224 "notify_get_types", 00:05:26.224 "bdev_get_histogram", 00:05:26.224 "bdev_enable_histogram", 00:05:26.224 "bdev_set_qos_limit", 00:05:26.224 "bdev_set_qd_sampling_period", 00:05:26.224 "bdev_get_bdevs", 00:05:26.224 "bdev_reset_iostat", 00:05:26.224 "bdev_get_iostat", 00:05:26.224 "bdev_examine", 00:05:26.224 "bdev_wait_for_examine", 00:05:26.224 "bdev_set_options", 00:05:26.224 "scsi_get_devices", 00:05:26.224 "thread_set_cpumask", 00:05:26.224 "framework_get_scheduler", 00:05:26.224 "framework_set_scheduler", 00:05:26.224 "framework_get_reactors", 00:05:26.224 "thread_get_io_channels", 00:05:26.224 "thread_get_pollers", 00:05:26.224 "thread_get_stats", 00:05:26.224 "framework_monitor_context_switch", 00:05:26.224 "spdk_kill_instance", 00:05:26.224 "log_enable_timestamps", 00:05:26.224 "log_get_flags", 00:05:26.224 "log_clear_flag", 00:05:26.224 "log_set_flag", 00:05:26.224 "log_get_level", 00:05:26.224 "log_set_level", 00:05:26.224 "log_get_print_level", 00:05:26.224 "log_set_print_level", 00:05:26.224 "framework_enable_cpumask_locks", 00:05:26.224 "framework_disable_cpumask_locks", 00:05:26.224 "framework_wait_init", 00:05:26.224 
"framework_start_init", 00:05:26.224 "virtio_blk_create_transport", 00:05:26.224 "virtio_blk_get_transports", 00:05:26.224 "vhost_controller_set_coalescing", 00:05:26.224 "vhost_get_controllers", 00:05:26.224 "vhost_delete_controller", 00:05:26.224 "vhost_create_blk_controller", 00:05:26.224 "vhost_scsi_controller_remove_target", 00:05:26.224 "vhost_scsi_controller_add_target", 00:05:26.224 "vhost_start_scsi_controller", 00:05:26.224 "vhost_create_scsi_controller", 00:05:26.224 "ublk_recover_disk", 00:05:26.224 "ublk_get_disks", 00:05:26.224 "ublk_stop_disk", 00:05:26.224 "ublk_start_disk", 00:05:26.224 "ublk_destroy_target", 00:05:26.224 "ublk_create_target", 00:05:26.224 "nbd_get_disks", 00:05:26.224 "nbd_stop_disk", 00:05:26.224 "nbd_start_disk", 00:05:26.224 "env_dpdk_get_mem_stats", 00:05:26.224 "nvmf_subsystem_get_listeners", 00:05:26.224 "nvmf_subsystem_get_qpairs", 00:05:26.224 "nvmf_subsystem_get_controllers", 00:05:26.224 "nvmf_get_stats", 00:05:26.224 "nvmf_get_transports", 00:05:26.224 "nvmf_create_transport", 00:05:26.224 "nvmf_get_targets", 00:05:26.224 "nvmf_delete_target", 00:05:26.224 "nvmf_create_target", 00:05:26.224 "nvmf_subsystem_allow_any_host", 00:05:26.224 "nvmf_subsystem_remove_host", 00:05:26.224 "nvmf_subsystem_add_host", 00:05:26.224 "nvmf_subsystem_remove_ns", 00:05:26.224 "nvmf_subsystem_add_ns", 00:05:26.224 "nvmf_subsystem_listener_set_ana_state", 00:05:26.224 "nvmf_discovery_get_referrals", 00:05:26.224 "nvmf_discovery_remove_referral", 00:05:26.224 "nvmf_discovery_add_referral", 00:05:26.225 "nvmf_subsystem_remove_listener", 00:05:26.225 "nvmf_subsystem_add_listener", 00:05:26.225 "nvmf_delete_subsystem", 00:05:26.225 "nvmf_create_subsystem", 00:05:26.225 "nvmf_get_subsystems", 00:05:26.225 "nvmf_set_crdt", 00:05:26.225 "nvmf_set_config", 00:05:26.225 "nvmf_set_max_subsystems", 00:05:26.225 "iscsi_set_options", 00:05:26.225 "iscsi_get_auth_groups", 00:05:26.225 "iscsi_auth_group_remove_secret", 00:05:26.225 "iscsi_auth_group_add_secret", 00:05:26.225 "iscsi_delete_auth_group", 00:05:26.225 "iscsi_create_auth_group", 00:05:26.225 "iscsi_set_discovery_auth", 00:05:26.225 "iscsi_get_options", 00:05:26.225 "iscsi_target_node_request_logout", 00:05:26.225 "iscsi_target_node_set_redirect", 00:05:26.225 "iscsi_target_node_set_auth", 00:05:26.225 "iscsi_target_node_add_lun", 00:05:26.225 "iscsi_get_connections", 00:05:26.225 "iscsi_portal_group_set_auth", 00:05:26.225 "iscsi_start_portal_group", 00:05:26.225 "iscsi_delete_portal_group", 00:05:26.225 "iscsi_create_portal_group", 00:05:26.225 "iscsi_get_portal_groups", 00:05:26.225 "iscsi_delete_target_node", 00:05:26.225 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.225 "iscsi_target_node_add_pg_ig_maps", 00:05:26.225 "iscsi_create_target_node", 00:05:26.225 "iscsi_get_target_nodes", 00:05:26.225 "iscsi_delete_initiator_group", 00:05:26.225 "iscsi_initiator_group_remove_initiators", 00:05:26.225 "iscsi_initiator_group_add_initiators", 00:05:26.225 "iscsi_create_initiator_group", 00:05:26.225 "iscsi_get_initiator_groups", 00:05:26.225 "vfu_virtio_create_scsi_endpoint", 00:05:26.225 "vfu_virtio_scsi_remove_target", 00:05:26.225 "vfu_virtio_scsi_add_target", 00:05:26.225 "vfu_virtio_create_blk_endpoint", 00:05:26.225 "vfu_virtio_delete_endpoint", 00:05:26.225 "iaa_scan_accel_module", 00:05:26.225 "dsa_scan_accel_module", 00:05:26.225 "ioat_scan_accel_module", 00:05:26.225 "accel_error_inject_error", 00:05:26.225 "bdev_iscsi_delete", 00:05:26.225 "bdev_iscsi_create", 00:05:26.225 "bdev_iscsi_set_options", 
00:05:26.225 "bdev_virtio_attach_controller", 00:05:26.225 "bdev_virtio_scsi_get_devices", 00:05:26.225 "bdev_virtio_detach_controller", 00:05:26.225 "bdev_virtio_blk_set_hotplug", 00:05:26.225 "bdev_ftl_set_property", 00:05:26.225 "bdev_ftl_get_properties", 00:05:26.225 "bdev_ftl_get_stats", 00:05:26.225 "bdev_ftl_unmap", 00:05:26.225 "bdev_ftl_unload", 00:05:26.225 "bdev_ftl_delete", 00:05:26.225 "bdev_ftl_load", 00:05:26.225 "bdev_ftl_create", 00:05:26.225 "bdev_aio_delete", 00:05:26.225 "bdev_aio_rescan", 00:05:26.225 "bdev_aio_create", 00:05:26.225 "blobfs_create", 00:05:26.225 "blobfs_detect", 00:05:26.225 "blobfs_set_cache_size", 00:05:26.225 "bdev_zone_block_delete", 00:05:26.225 "bdev_zone_block_create", 00:05:26.225 "bdev_delay_delete", 00:05:26.225 "bdev_delay_create", 00:05:26.225 "bdev_delay_update_latency", 00:05:26.225 "bdev_split_delete", 00:05:26.225 "bdev_split_create", 00:05:26.225 "bdev_error_inject_error", 00:05:26.225 "bdev_error_delete", 00:05:26.225 "bdev_error_create", 00:05:26.225 "bdev_raid_set_options", 00:05:26.225 "bdev_raid_remove_base_bdev", 00:05:26.225 "bdev_raid_add_base_bdev", 00:05:26.225 "bdev_raid_delete", 00:05:26.225 "bdev_raid_create", 00:05:26.225 "bdev_raid_get_bdevs", 00:05:26.225 "bdev_lvol_grow_lvstore", 00:05:26.225 "bdev_lvol_get_lvols", 00:05:26.225 "bdev_lvol_get_lvstores", 00:05:26.225 "bdev_lvol_delete", 00:05:26.225 "bdev_lvol_set_read_only", 00:05:26.225 "bdev_lvol_resize", 00:05:26.225 "bdev_lvol_decouple_parent", 00:05:26.225 "bdev_lvol_inflate", 00:05:26.225 "bdev_lvol_rename", 00:05:26.225 "bdev_lvol_clone_bdev", 00:05:26.225 "bdev_lvol_clone", 00:05:26.225 "bdev_lvol_snapshot", 00:05:26.225 "bdev_lvol_create", 00:05:26.225 "bdev_lvol_delete_lvstore", 00:05:26.225 "bdev_lvol_rename_lvstore", 00:05:26.225 "bdev_lvol_create_lvstore", 00:05:26.225 "bdev_passthru_delete", 00:05:26.225 "bdev_passthru_create", 00:05:26.225 "bdev_nvme_cuse_unregister", 00:05:26.225 "bdev_nvme_cuse_register", 00:05:26.225 "bdev_opal_new_user", 00:05:26.225 "bdev_opal_set_lock_state", 00:05:26.225 "bdev_opal_delete", 00:05:26.225 "bdev_opal_get_info", 00:05:26.225 "bdev_opal_create", 00:05:26.225 "bdev_nvme_opal_revert", 00:05:26.225 "bdev_nvme_opal_init", 00:05:26.225 "bdev_nvme_send_cmd", 00:05:26.225 "bdev_nvme_get_path_iostat", 00:05:26.225 "bdev_nvme_get_mdns_discovery_info", 00:05:26.225 "bdev_nvme_stop_mdns_discovery", 00:05:26.225 "bdev_nvme_start_mdns_discovery", 00:05:26.225 "bdev_nvme_set_multipath_policy", 00:05:26.225 "bdev_nvme_set_preferred_path", 00:05:26.225 "bdev_nvme_get_io_paths", 00:05:26.225 "bdev_nvme_remove_error_injection", 00:05:26.225 "bdev_nvme_add_error_injection", 00:05:26.225 "bdev_nvme_get_discovery_info", 00:05:26.225 "bdev_nvme_stop_discovery", 00:05:26.225 "bdev_nvme_start_discovery", 00:05:26.225 "bdev_nvme_get_controller_health_info", 00:05:26.225 "bdev_nvme_disable_controller", 00:05:26.225 "bdev_nvme_enable_controller", 00:05:26.225 "bdev_nvme_reset_controller", 00:05:26.225 "bdev_nvme_get_transport_statistics", 00:05:26.225 "bdev_nvme_apply_firmware", 00:05:26.225 "bdev_nvme_detach_controller", 00:05:26.225 "bdev_nvme_get_controllers", 00:05:26.225 "bdev_nvme_attach_controller", 00:05:26.225 "bdev_nvme_set_hotplug", 00:05:26.225 "bdev_nvme_set_options", 00:05:26.225 "bdev_null_resize", 00:05:26.225 "bdev_null_delete", 00:05:26.225 "bdev_null_create", 00:05:26.225 "bdev_malloc_delete", 00:05:26.225 "bdev_malloc_create" 00:05:26.225 ] 00:05:26.225 17:14:45 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 
00:05:26.225 17:14:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.225 17:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.225 17:14:45 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.225 17:14:45 -- spdkcli/tcp.sh@38 -- # killprocess 343449 00:05:26.225 17:14:45 -- common/autotest_common.sh@936 -- # '[' -z 343449 ']' 00:05:26.225 17:14:45 -- common/autotest_common.sh@940 -- # kill -0 343449 00:05:26.225 17:14:45 -- common/autotest_common.sh@941 -- # uname 00:05:26.225 17:14:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.225 17:14:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 343449 00:05:26.225 17:14:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.225 17:14:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.225 17:14:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 343449' 00:05:26.225 killing process with pid 343449 00:05:26.225 17:14:45 -- common/autotest_common.sh@955 -- # kill 343449 00:05:26.225 17:14:45 -- common/autotest_common.sh@960 -- # wait 343449 00:05:26.484 00:05:26.484 real 0m1.587s 00:05:26.485 user 0m2.854s 00:05:26.485 sys 0m0.500s 00:05:26.485 17:14:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.485 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:05:26.485 ************************************ 00:05:26.485 END TEST spdkcli_tcp 00:05:26.485 ************************************ 00:05:26.744 17:14:46 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.744 17:14:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.744 17:14:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.744 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:05:26.744 ************************************ 00:05:26.744 START TEST dpdk_mem_utility 00:05:26.744 ************************************ 00:05:26.744 17:14:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.744 * Looking for test storage... 
00:05:26.744 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:26.744 17:14:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.744 17:14:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.745 17:14:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.745 17:14:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.745 17:14:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.745 17:14:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.745 17:14:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.745 17:14:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.745 17:14:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.745 17:14:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.745 17:14:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.745 17:14:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.745 17:14:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.745 17:14:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.745 17:14:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.745 17:14:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.745 17:14:46 -- scripts/common.sh@344 -- # : 1 00:05:26.745 17:14:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.745 17:14:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.745 17:14:46 -- scripts/common.sh@364 -- # decimal 1 00:05:26.745 17:14:46 -- scripts/common.sh@352 -- # local d=1 00:05:26.745 17:14:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.745 17:14:46 -- scripts/common.sh@354 -- # echo 1 00:05:26.745 17:14:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.745 17:14:46 -- scripts/common.sh@365 -- # decimal 2 00:05:26.745 17:14:46 -- scripts/common.sh@352 -- # local d=2 00:05:26.745 17:14:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.745 17:14:46 -- scripts/common.sh@354 -- # echo 2 00:05:26.745 17:14:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.745 17:14:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.745 17:14:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.745 17:14:46 -- scripts/common.sh@367 -- # return 0 00:05:26.745 17:14:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.745 17:14:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.745 --rc genhtml_branch_coverage=1 00:05:26.745 --rc genhtml_function_coverage=1 00:05:26.745 --rc genhtml_legend=1 00:05:26.745 --rc geninfo_all_blocks=1 00:05:26.745 --rc geninfo_unexecuted_blocks=1 00:05:26.745 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:26.745 ' 00:05:26.745 17:14:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.745 --rc genhtml_branch_coverage=1 00:05:26.745 --rc genhtml_function_coverage=1 00:05:26.745 --rc genhtml_legend=1 00:05:26.745 --rc geninfo_all_blocks=1 00:05:26.745 --rc geninfo_unexecuted_blocks=1 00:05:26.745 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:26.745 ' 00:05:26.745 17:14:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.745 --rc 
genhtml_branch_coverage=1 00:05:26.745 --rc genhtml_function_coverage=1 00:05:26.745 --rc genhtml_legend=1 00:05:26.745 --rc geninfo_all_blocks=1 00:05:26.745 --rc geninfo_unexecuted_blocks=1 00:05:26.745 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:26.745 ' 00:05:26.745 17:14:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.745 --rc genhtml_branch_coverage=1 00:05:26.745 --rc genhtml_function_coverage=1 00:05:26.745 --rc genhtml_legend=1 00:05:26.745 --rc geninfo_all_blocks=1 00:05:26.745 --rc geninfo_unexecuted_blocks=1 00:05:26.745 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:26.745 ' 00:05:26.745 17:14:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.745 17:14:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=343793 00:05:26.745 17:14:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 343793 00:05:26.745 17:14:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.745 17:14:46 -- common/autotest_common.sh@829 -- # '[' -z 343793 ']' 00:05:26.745 17:14:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.745 17:14:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.745 17:14:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.745 17:14:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.745 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:05:26.745 [2024-11-09 17:14:46.483369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
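The dpdk_mem_utility test starting here is driven by two commands, both visible in the trace that follows: an RPC that makes the target dump its DPDK memory state, and a parser script that summarizes the dump. In sketch form; that dpdk_mem_info.py reads /tmp/spdk_mem_dump.txt by default is inferred from this trace, not verified:

    # Ask the running target to dump its DPDK memory state; the RPC
    # reply names the dump file (/tmp/spdk_mem_dump.txt in this run).
    scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize heaps, mempools, and memzones from the dump.
    scripts/dpdk_mem_info.py

    # Per-element breakdown (free/busy elements, memzone associations),
    # matching the -m 0 invocation in the trace below.
    scripts/dpdk_mem_info.py -m 0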
00:05:26.745 [2024-11-09 17:14:46.483438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343793 ] 00:05:27.005 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.005 [2024-11-09 17:14:46.549659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.005 [2024-11-09 17:14:46.619967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.005 [2024-11-09 17:14:46.620076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.572 17:14:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.572 17:14:47 -- common/autotest_common.sh@862 -- # return 0 00:05:27.572 17:14:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.572 17:14:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.572 17:14:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.572 17:14:47 -- common/autotest_common.sh@10 -- # set +x 00:05:27.572 { 00:05:27.572 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.572 } 00:05:27.572 17:14:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.572 17:14:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.832 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:27.832 1 heaps totaling size 814.000000 MiB 00:05:27.832 size: 814.000000 MiB heap id: 0 00:05:27.832 end heaps---------- 00:05:27.832 8 mempools totaling size 598.116089 MiB 00:05:27.832 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.832 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.832 size: 84.521057 MiB name: bdev_io_343793 00:05:27.832 size: 51.011292 MiB name: evtpool_343793 00:05:27.832 size: 50.003479 MiB name: msgpool_343793 00:05:27.832 size: 21.763794 MiB name: PDU_Pool 00:05:27.832 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.832 size: 0.026123 MiB name: Session_Pool 00:05:27.832 end mempools------- 00:05:27.832 6 memzones totaling size 4.142822 MiB 00:05:27.832 size: 1.000366 MiB name: RG_ring_0_343793 00:05:27.832 size: 1.000366 MiB name: RG_ring_1_343793 00:05:27.832 size: 1.000366 MiB name: RG_ring_4_343793 00:05:27.832 size: 1.000366 MiB name: RG_ring_5_343793 00:05:27.832 size: 0.125366 MiB name: RG_ring_2_343793 00:05:27.832 size: 0.015991 MiB name: RG_ring_3_343793 00:05:27.832 end memzones------- 00:05:27.832 17:14:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.832 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:27.832 list of free elements. 
size: 12.519348 MiB 00:05:27.832 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:27.832 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:27.832 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:27.832 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:27.832 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:27.832 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:27.832 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:27.832 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:27.832 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:27.832 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:27.832 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:27.832 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:27.832 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:27.832 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:27.832 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:27.832 list of standard malloc elements. size: 199.218079 MiB 00:05:27.832 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:27.832 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:27.832 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:27.832 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:27.832 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:27.832 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:27.832 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:27.832 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:27.832 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:27.832 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:27.832 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:27.832 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:27.832 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:27.832 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:27.832 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:27.832 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:27.833 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:27.833 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:27.833 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:27.833 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:27.833 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:27.833 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:27.833 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:27.833 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:27.833 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:27.833 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:27.833 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:27.833 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:27.833 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:27.833 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:27.833 list of memzone associated elements. size: 602.262573 MiB 00:05:27.833 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:27.833 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:27.833 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:27.833 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:27.833 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:27.833 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_343793_0 00:05:27.833 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:27.833 associated memzone info: size: 48.002930 MiB name: MP_evtpool_343793_0 00:05:27.833 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:27.833 associated memzone info: size: 48.002930 MiB name: MP_msgpool_343793_0 00:05:27.833 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:27.833 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:27.833 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:27.833 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:27.833 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:27.833 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_343793 00:05:27.833 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:27.833 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_343793 00:05:27.833 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:27.833 associated memzone info: size: 1.007996 MiB name: MP_evtpool_343793 00:05:27.833 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:27.833 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:27.833 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:27.833 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:27.833 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:27.833 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:27.833 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:27.833 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:27.833 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:27.833 associated memzone info: size: 1.000366 MiB name: RG_ring_0_343793 00:05:27.833 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:27.833 associated memzone info: size: 1.000366 MiB name: RG_ring_1_343793 00:05:27.833 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:27.833 associated memzone info: size: 1.000366 MiB name: RG_ring_4_343793 00:05:27.833 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:27.833 associated memzone info: size: 1.000366 MiB name: RG_ring_5_343793 00:05:27.833 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:27.833 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_343793 00:05:27.833 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:27.833 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:27.833 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:27.833 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:27.833 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:27.833 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:27.833 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:27.833 associated memzone info: size: 0.125366 MiB name: RG_ring_2_343793 00:05:27.833 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:27.833 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:27.833 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:27.833 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:27.833 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:27.833 associated memzone info: size: 0.015991 MiB name: RG_ring_3_343793 00:05:27.833 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:27.833 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:27.833 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:27.833 associated memzone info: size: 0.000183 MiB name: MP_msgpool_343793 00:05:27.833 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:27.833 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_343793 00:05:27.833 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:27.833 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:27.833 17:14:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:27.833 17:14:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 343793 00:05:27.833 17:14:47 -- common/autotest_common.sh@936 -- # '[' -z 343793 ']' 00:05:27.833 17:14:47 -- common/autotest_common.sh@940 -- # kill -0 343793 00:05:27.833 17:14:47 -- common/autotest_common.sh@941 -- # uname 00:05:27.833 17:14:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.833 17:14:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 343793 00:05:27.833 17:14:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.833 17:14:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.833 17:14:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 343793' 00:05:27.833 killing process with pid 343793 00:05:27.833 17:14:47 -- common/autotest_common.sh@955 -- # kill 343793 00:05:27.833 17:14:47 -- common/autotest_common.sh@960 -- # wait 343793 00:05:28.093 00:05:28.093 real 0m1.506s 00:05:28.093 user 0m1.554s 00:05:28.093 sys 0m0.452s 00:05:28.093 17:14:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.093 17:14:47 -- common/autotest_common.sh@10 -- # set +x 00:05:28.093 ************************************ 00:05:28.093 END TEST dpdk_mem_utility 00:05:28.093 ************************************ 00:05:28.093 17:14:47 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:28.093 17:14:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.093 17:14:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.093 17:14:47 -- common/autotest_common.sh@10 -- # set +x 00:05:28.093 
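(For reference: the memzone dump above is emitted by the dpdk_mem_utility test that just finished. A sketch of re-running it by hand against this job's workspace checkout; the script path is inferred from the trace prefixes above, so treat it as an assumption:)

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    sudo test/dpdk_memory_utility/test_dpdk_mem_info.sh   # prints the same element/memzone dump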
************************************ 00:05:28.093 START TEST event 00:05:28.093 ************************************ 00:05:28.093 17:14:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:28.353 * Looking for test storage... 00:05:28.353 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:28.353 17:14:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:28.353 17:14:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:28.353 17:14:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:28.353 17:14:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:28.353 17:14:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:28.353 17:14:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:28.353 17:14:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:28.353 17:14:48 -- scripts/common.sh@335 -- # IFS=.-: 00:05:28.353 17:14:48 -- scripts/common.sh@335 -- # read -ra ver1 00:05:28.353 17:14:48 -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.353 17:14:48 -- scripts/common.sh@336 -- # read -ra ver2 00:05:28.353 17:14:48 -- scripts/common.sh@337 -- # local 'op=<' 00:05:28.353 17:14:48 -- scripts/common.sh@339 -- # ver1_l=2 00:05:28.353 17:14:48 -- scripts/common.sh@340 -- # ver2_l=1 00:05:28.353 17:14:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:28.353 17:14:48 -- scripts/common.sh@343 -- # case "$op" in 00:05:28.353 17:14:48 -- scripts/common.sh@344 -- # : 1 00:05:28.353 17:14:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:28.353 17:14:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.353 17:14:48 -- scripts/common.sh@364 -- # decimal 1 00:05:28.353 17:14:48 -- scripts/common.sh@352 -- # local d=1 00:05:28.353 17:14:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.353 17:14:48 -- scripts/common.sh@354 -- # echo 1 00:05:28.353 17:14:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:28.353 17:14:48 -- scripts/common.sh@365 -- # decimal 2 00:05:28.353 17:14:48 -- scripts/common.sh@352 -- # local d=2 00:05:28.353 17:14:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.353 17:14:48 -- scripts/common.sh@354 -- # echo 2 00:05:28.353 17:14:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:28.353 17:14:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:28.353 17:14:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:28.353 17:14:48 -- scripts/common.sh@367 -- # return 0 00:05:28.353 17:14:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.353 17:14:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:28.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.353 --rc genhtml_branch_coverage=1 00:05:28.353 --rc genhtml_function_coverage=1 00:05:28.353 --rc genhtml_legend=1 00:05:28.353 --rc geninfo_all_blocks=1 00:05:28.353 --rc geninfo_unexecuted_blocks=1 00:05:28.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:28.353 ' 00:05:28.353 17:14:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:28.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.353 --rc genhtml_branch_coverage=1 00:05:28.353 --rc genhtml_function_coverage=1 00:05:28.353 --rc genhtml_legend=1 00:05:28.353 --rc geninfo_all_blocks=1 00:05:28.353 --rc geninfo_unexecuted_blocks=1 00:05:28.353 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:28.353 ' 00:05:28.353 17:14:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:28.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.353 --rc genhtml_branch_coverage=1 00:05:28.353 --rc genhtml_function_coverage=1 00:05:28.353 --rc genhtml_legend=1 00:05:28.353 --rc geninfo_all_blocks=1 00:05:28.353 --rc geninfo_unexecuted_blocks=1 00:05:28.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:28.353 ' 00:05:28.353 17:14:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:28.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.353 --rc genhtml_branch_coverage=1 00:05:28.353 --rc genhtml_function_coverage=1 00:05:28.353 --rc genhtml_legend=1 00:05:28.353 --rc geninfo_all_blocks=1 00:05:28.353 --rc geninfo_unexecuted_blocks=1 00:05:28.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:28.353 ' 00:05:28.353 17:14:48 -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.353 17:14:48 -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.353 17:14:48 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.353 17:14:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:28.353 17:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.353 17:14:48 -- common/autotest_common.sh@10 -- # set +x 00:05:28.353 ************************************ 00:05:28.353 START TEST event_perf 00:05:28.353 ************************************ 00:05:28.353 17:14:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.353 Running I/O for 1 seconds...[2024-11-09 17:14:48.048794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:28.353 [2024-11-09 17:14:48.048902] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344131 ] 00:05:28.353 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.353 [2024-11-09 17:14:48.115774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.612 [2024-11-09 17:14:48.189163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.612 [2024-11-09 17:14:48.189260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.612 [2024-11-09 17:14:48.189324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.612 [2024-11-09 17:14:48.189326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.550 Running I/O for 1 seconds... 00:05:29.550 lcore 0: 197125 00:05:29.550 lcore 1: 197125 00:05:29.550 lcore 2: 197128 00:05:29.550 lcore 3: 197127 00:05:29.550 done. 
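(The event_perf run above can be reproduced directly; binary path and flags are taken verbatim from the trace. A sketch, assuming a built SPDK tree and hugepages configured as in this job:)

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    sudo test/event/event_perf/event_perf -m 0xF -t 1
    # -m 0xF : run reactors on cores 0-3, matching the four "lcore N" counters above
    # -t 1   : run the event benchmark for 1 second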
00:05:29.550 00:05:29.550 real 0m1.222s 00:05:29.550 user 0m4.129s 00:05:29.550 sys 0m0.090s 00:05:29.550 17:14:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.550 17:14:49 -- common/autotest_common.sh@10 -- # set +x 00:05:29.550 ************************************ 00:05:29.550 END TEST event_perf 00:05:29.550 ************************************ 00:05:29.550 17:14:49 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.550 17:14:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:29.550 17:14:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.550 17:14:49 -- common/autotest_common.sh@10 -- # set +x 00:05:29.550 ************************************ 00:05:29.550 START TEST event_reactor 00:05:29.550 ************************************ 00:05:29.550 17:14:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.809 [2024-11-09 17:14:49.320991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.809 [2024-11-09 17:14:49.321085] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344416 ] 00:05:29.809 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.809 [2024-11-09 17:14:49.391668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.809 [2024-11-09 17:14:49.456473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.186 test_start 00:05:31.186 oneshot 00:05:31.186 tick 100 00:05:31.186 tick 100 00:05:31.186 tick 250 00:05:31.186 tick 100 00:05:31.186 tick 100 00:05:31.186 tick 100 00:05:31.186 tick 250 00:05:31.186 tick 500 00:05:31.186 tick 100 00:05:31.186 tick 100 00:05:31.186 tick 250 00:05:31.186 tick 100 00:05:31.186 tick 100 00:05:31.186 test_end 00:05:31.186 00:05:31.186 real 0m1.222s 00:05:31.186 user 0m1.125s 00:05:31.186 sys 0m0.092s 00:05:31.186 17:14:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.186 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.186 ************************************ 00:05:31.186 END TEST event_reactor 00:05:31.186 ************************************ 00:05:31.186 17:14:50 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.186 17:14:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:31.186 17:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.186 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.186 ************************************ 00:05:31.186 START TEST event_reactor_perf 00:05:31.186 ************************************ 00:05:31.186 17:14:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.186 [2024-11-09 17:14:50.593302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:31.186 [2024-11-09 17:14:50.593419] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344698 ] 00:05:31.186 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.186 [2024-11-09 17:14:50.665006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.186 [2024-11-09 17:14:50.731971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.122 test_start 00:05:32.122 test_end 00:05:32.122 Performance: 978045 events per second 00:05:32.122 00:05:32.122 real 0m1.219s 00:05:32.122 user 0m1.134s 00:05:32.122 sys 0m0.080s 00:05:32.122 17:14:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.122 17:14:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.122 ************************************ 00:05:32.122 END TEST event_reactor_perf 00:05:32.122 ************************************ 00:05:32.122 17:14:51 -- event/event.sh@49 -- # uname -s 00:05:32.122 17:14:51 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.122 17:14:51 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.122 17:14:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.122 17:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.122 17:14:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.122 ************************************ 00:05:32.122 START TEST event_scheduler 00:05:32.122 ************************************ 00:05:32.122 17:14:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.382 * Looking for test storage... 00:05:32.382 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:32.382 17:14:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:32.382 17:14:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:32.382 17:14:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:32.382 17:14:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:32.382 17:14:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:32.382 17:14:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:32.382 17:14:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:32.382 17:14:52 -- scripts/common.sh@335 -- # IFS=.-: 00:05:32.382 17:14:52 -- scripts/common.sh@335 -- # read -ra ver1 00:05:32.382 17:14:52 -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.382 17:14:52 -- scripts/common.sh@336 -- # read -ra ver2 00:05:32.382 17:14:52 -- scripts/common.sh@337 -- # local 'op=<' 00:05:32.382 17:14:52 -- scripts/common.sh@339 -- # ver1_l=2 00:05:32.382 17:14:52 -- scripts/common.sh@340 -- # ver2_l=1 00:05:32.382 17:14:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:32.382 17:14:52 -- scripts/common.sh@343 -- # case "$op" in 00:05:32.382 17:14:52 -- scripts/common.sh@344 -- # : 1 00:05:32.382 17:14:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:32.382 17:14:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.382 17:14:52 -- scripts/common.sh@364 -- # decimal 1 00:05:32.382 17:14:52 -- scripts/common.sh@352 -- # local d=1 00:05:32.382 17:14:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.382 17:14:52 -- scripts/common.sh@354 -- # echo 1 00:05:32.382 17:14:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:32.382 17:14:52 -- scripts/common.sh@365 -- # decimal 2 00:05:32.382 17:14:52 -- scripts/common.sh@352 -- # local d=2 00:05:32.382 17:14:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.382 17:14:52 -- scripts/common.sh@354 -- # echo 2 00:05:32.382 17:14:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:32.382 17:14:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:32.382 17:14:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:32.382 17:14:52 -- scripts/common.sh@367 -- # return 0 00:05:32.382 17:14:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.382 17:14:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.382 --rc genhtml_branch_coverage=1 00:05:32.382 --rc genhtml_function_coverage=1 00:05:32.382 --rc genhtml_legend=1 00:05:32.382 --rc geninfo_all_blocks=1 00:05:32.382 --rc geninfo_unexecuted_blocks=1 00:05:32.382 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:32.382 ' 00:05:32.382 17:14:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.382 --rc genhtml_branch_coverage=1 00:05:32.382 --rc genhtml_function_coverage=1 00:05:32.382 --rc genhtml_legend=1 00:05:32.382 --rc geninfo_all_blocks=1 00:05:32.382 --rc geninfo_unexecuted_blocks=1 00:05:32.382 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:32.382 ' 00:05:32.382 17:14:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.382 --rc genhtml_branch_coverage=1 00:05:32.382 --rc genhtml_function_coverage=1 00:05:32.382 --rc genhtml_legend=1 00:05:32.382 --rc geninfo_all_blocks=1 00:05:32.382 --rc geninfo_unexecuted_blocks=1 00:05:32.382 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:32.382 ' 00:05:32.382 17:14:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.382 --rc genhtml_branch_coverage=1 00:05:32.382 --rc genhtml_function_coverage=1 00:05:32.382 --rc genhtml_legend=1 00:05:32.382 --rc geninfo_all_blocks=1 00:05:32.382 --rc geninfo_unexecuted_blocks=1 00:05:32.382 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:32.382 ' 00:05:32.382 17:14:52 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.382 17:14:52 -- scheduler/scheduler.sh@35 -- # scheduler_pid=345021 00:05:32.382 17:14:52 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.382 17:14:52 -- scheduler/scheduler.sh@37 -- # waitforlisten 345021 00:05:32.382 17:14:52 -- common/autotest_common.sh@829 -- # '[' -z 345021 ']' 00:05:32.382 17:14:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.382 17:14:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.382 17:14:52 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.382 17:14:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.382 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:05:32.382 17:14:52 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.382 [2024-11-09 17:14:52.057293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.382 [2024-11-09 17:14:52.057383] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345021 ] 00:05:32.382 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.382 [2024-11-09 17:14:52.121736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.641 [2024-11-09 17:14:52.199649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.641 [2024-11-09 17:14:52.199733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.641 [2024-11-09 17:14:52.199830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.641 [2024-11-09 17:14:52.199832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.283 17:14:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.283 17:14:52 -- common/autotest_common.sh@862 -- # return 0 00:05:33.283 17:14:52 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.283 17:14:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.283 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:05:33.283 POWER: Env isn't set yet! 00:05:33.283 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:33.283 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.283 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.283 POWER: Attempting to initialise PSTAT power management... 
00:05:33.283 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:33.283 POWER: Initialized successfully for lcore 0 power management 00:05:33.283 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:33.283 POWER: Initialized successfully for lcore 1 power management 00:05:33.283 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:33.283 POWER: Initialized successfully for lcore 2 power management 00:05:33.283 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:33.283 POWER: Initialized successfully for lcore 3 power management 00:05:33.283 [2024-11-09 17:14:52.950712] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:33.283 [2024-11-09 17:14:52.950727] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:33.283 [2024-11-09 17:14:52.950736] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:33.283 17:14:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.283 17:14:52 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:33.283 17:14:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.283 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:05:33.283 [2024-11-09 17:14:53.018760] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:33.283 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.283 17:14:53 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.283 17:14:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.283 17:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.283 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 ************************************ 00:05:33.543 START TEST scheduler_create_thread 00:05:33.543 ************************************ 00:05:33.543 17:14:53 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 2 00:05:33.543 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 3 00:05:33.543 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 4 00:05:33.543 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 5 00:05:33.543 
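(The scheduler_create_thread test drives everything through rpc_cmd, which wraps scripts/rpc.py against the app started with --wait-for-rpc above. A sketch of one of these calls issued by hand; the option meanings are inferred from the thread names the test uses, not documented in the log:)

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100
    # -n : name of the thread to create
    # -m : cpumask the thread is pinned to (0x1 = core 0)
    # -a : how busy the thread reports itself, in percent (100 = fully active)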
17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 6 00:05:33.543 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 7 00:05:33.543 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 8 00:05:33.543 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 9 00:05:33.543 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 10 00:05:33.543 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 17:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.543 17:14:53 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.543 17:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.543 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:34.482 17:14:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.482 17:14:54 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:34.482 17:14:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.482 17:14:54 -- common/autotest_common.sh@10 -- # set +x 00:05:35.861 17:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.861 17:14:55 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.861 17:14:55 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.861 17:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.861 17:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:36.797 17:14:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.797 00:05:36.797 real 0m3.382s 00:05:36.797 user 0m0.024s 00:05:36.797 sys 0m0.007s 00:05:36.797 17:14:56 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.797 17:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:36.797 ************************************ 00:05:36.797 END TEST scheduler_create_thread 00:05:36.797 ************************************ 00:05:36.797 17:14:56 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.798 17:14:56 -- scheduler/scheduler.sh@46 -- # killprocess 345021 00:05:36.798 17:14:56 -- common/autotest_common.sh@936 -- # '[' -z 345021 ']' 00:05:36.798 17:14:56 -- common/autotest_common.sh@940 -- # kill -0 345021 00:05:36.798 17:14:56 -- common/autotest_common.sh@941 -- # uname 00:05:36.798 17:14:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.798 17:14:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 345021 00:05:36.798 17:14:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:36.798 17:14:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:36.798 17:14:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 345021' 00:05:36.798 killing process with pid 345021 00:05:36.798 17:14:56 -- common/autotest_common.sh@955 -- # kill 345021 00:05:36.798 17:14:56 -- common/autotest_common.sh@960 -- # wait 345021 00:05:37.056 [2024-11-09 17:14:56.790586] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:37.317 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:37.317 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:37.317 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:37.317 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:37.317 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:37.317 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:37.317 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:37.317 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:37.317 00:05:37.317 real 0m5.162s 00:05:37.317 user 0m10.625s 00:05:37.317 sys 0m0.436s 00:05:37.317 17:14:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.317 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:37.317 ************************************ 00:05:37.317 END TEST event_scheduler 00:05:37.317 ************************************ 00:05:37.317 17:14:57 -- event/event.sh@51 -- # modprobe -n nbd 00:05:37.317 17:14:57 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:37.317 17:14:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.317 17:14:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.317 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:37.317 ************************************ 00:05:37.317 START TEST app_repeat 00:05:37.317 ************************************ 00:05:37.317 17:14:57 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:37.317 17:14:57 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.317 17:14:57 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.317 17:14:57 -- event/event.sh@13 -- # local nbd_list 00:05:37.317 17:14:57 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.317 17:14:57 -- 
event/event.sh@14 -- # local bdev_list 00:05:37.317 17:14:57 -- event/event.sh@15 -- # local repeat_times=4 00:05:37.317 17:14:57 -- event/event.sh@17 -- # modprobe nbd 00:05:37.317 17:14:57 -- event/event.sh@19 -- # repeat_pid=345895 00:05:37.317 17:14:57 -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:37.317 17:14:57 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.317 17:14:57 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 345895' 00:05:37.317 Process app_repeat pid: 345895 00:05:37.317 17:14:57 -- event/event.sh@23 -- # for i in {0..2} 00:05:37.317 17:14:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:37.317 spdk_app_start Round 0 00:05:37.317 17:14:57 -- event/event.sh@25 -- # waitforlisten 345895 /var/tmp/spdk-nbd.sock 00:05:37.317 17:14:57 -- common/autotest_common.sh@829 -- # '[' -z 345895 ']' 00:05:37.317 17:14:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.317 17:14:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.317 17:14:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.317 17:14:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.317 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:37.317 [2024-11-09 17:14:57.081991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.317 [2024-11-09 17:14:57.082057] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345895 ] 00:05:37.577 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.577 [2024-11-09 17:14:57.147836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.577 [2024-11-09 17:14:57.223958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.577 [2024-11-09 17:14:57.223961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.514 17:14:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.514 17:14:57 -- common/autotest_common.sh@862 -- # return 0 00:05:38.514 17:14:57 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.514 Malloc0 00:05:38.514 17:14:58 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.773 Malloc1 00:05:38.773 17:14:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.773 17:14:58 -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@12 -- # local i 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.773 /dev/nbd0 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.773 17:14:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:38.773 17:14:58 -- common/autotest_common.sh@867 -- # local i 00:05:38.773 17:14:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.773 17:14:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.773 17:14:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:38.773 17:14:58 -- common/autotest_common.sh@871 -- # break 00:05:38.773 17:14:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.773 17:14:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.773 17:14:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.773 1+0 records in 00:05:38.773 1+0 records out 00:05:38.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023629 s, 17.3 MB/s 00:05:38.773 17:14:58 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:38.773 17:14:58 -- common/autotest_common.sh@884 -- # size=4096 00:05:38.773 17:14:58 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:38.773 17:14:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.773 17:14:58 -- common/autotest_common.sh@887 -- # return 0 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.773 17:14:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.032 /dev/nbd1 00:05:39.032 17:14:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.032 17:14:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.032 17:14:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:39.032 17:14:58 -- common/autotest_common.sh@867 -- # local i 00:05:39.032 17:14:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.033 17:14:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.033 17:14:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:39.033 17:14:58 -- common/autotest_common.sh@871 -- # break 00:05:39.033 17:14:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.033 17:14:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.033 17:14:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 
00:05:39.033 1+0 records in 00:05:39.033 1+0 records out 00:05:39.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239838 s, 17.1 MB/s 00:05:39.033 17:14:58 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:39.033 17:14:58 -- common/autotest_common.sh@884 -- # size=4096 00:05:39.033 17:14:58 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:39.033 17:14:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.033 17:14:58 -- common/autotest_common.sh@887 -- # return 0 00:05:39.033 17:14:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.033 17:14:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.033 17:14:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.033 17:14:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.033 17:14:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.292 17:14:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.292 { 00:05:39.292 "nbd_device": "/dev/nbd0", 00:05:39.292 "bdev_name": "Malloc0" 00:05:39.292 }, 00:05:39.292 { 00:05:39.292 "nbd_device": "/dev/nbd1", 00:05:39.292 "bdev_name": "Malloc1" 00:05:39.292 } 00:05:39.292 ]' 00:05:39.292 17:14:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.292 { 00:05:39.292 "nbd_device": "/dev/nbd0", 00:05:39.292 "bdev_name": "Malloc0" 00:05:39.292 }, 00:05:39.292 { 00:05:39.292 "nbd_device": "/dev/nbd1", 00:05:39.293 "bdev_name": "Malloc1" 00:05:39.293 } 00:05:39.293 ]' 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.293 /dev/nbd1' 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.293 /dev/nbd1' 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.293 256+0 records in 00:05:39.293 256+0 records out 00:05:39.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00617938 s, 170 MB/s 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.293 256+0 records in 00:05:39.293 256+0 records out 00:05:39.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197644 s, 53.1 MB/s 00:05:39.293 17:14:58 -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.293 17:14:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.293 256+0 records in 00:05:39.293 256+0 records out 00:05:39.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212064 s, 49.4 MB/s 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@51 -- # local i 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.293 17:14:59 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@41 -- # break 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.552 17:14:59 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@41 -- # break 00:05:39.811 17:14:59 -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.811 17:14:59 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@65 -- # true 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.069 17:14:59 -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.069 17:14:59 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.327 17:14:59 -- event/event.sh@35 -- # sleep 3 00:05:40.327 [2024-11-09 17:15:00.061693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.587 [2024-11-09 17:15:00.140365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.587 [2024-11-09 17:15:00.140367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.587 [2024-11-09 17:15:00.181208] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.587 [2024-11-09 17:15:00.181251] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.123 17:15:02 -- event/event.sh@23 -- # for i in {0..2} 00:05:43.123 17:15:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.123 spdk_app_start Round 1 00:05:43.123 17:15:02 -- event/event.sh@25 -- # waitforlisten 345895 /var/tmp/spdk-nbd.sock 00:05:43.123 17:15:02 -- common/autotest_common.sh@829 -- # '[' -z 345895 ']' 00:05:43.123 17:15:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.123 17:15:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.123 17:15:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
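(Each app_repeat round above follows the same write/verify pattern against the two nbd exports. A condensed sketch of that pattern, with $TEST as a stand-in name for the nbdrandtest scratch file seen in the trace:)

    TEST=/tmp/nbdrandtest                                       # stand-in path
    dd if=/dev/urandom of="$TEST" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$TEST" of="$nbd" bs=4096 count=256 oflag=direct  # write it to the export
        cmp -b -n 1M "$TEST" "$nbd"                             # read back and compare
    done
    rm "$TEST"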
00:05:43.123 17:15:02 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:43.123 17:15:02 -- common/autotest_common.sh@10 -- # set +x
00:05:43.383 17:15:03 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:43.383 17:15:03 -- common/autotest_common.sh@862 -- # return 0
00:05:43.383 17:15:03 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:43.642 Malloc0
00:05:43.642 17:15:03 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:43.642 Malloc1
00:05:43.901 17:15:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:43.901 17:15:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.901 17:15:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:43.901 17:15:03 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:43.901 17:15:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.901 17:15:03 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:43.901 17:15:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:43.901 17:15:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.901 17:15:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:43.901 17:15:03 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@12 -- # local i
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:43.902 /dev/nbd0
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:43.902 17:15:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:43.902 17:15:03 -- common/autotest_common.sh@867 -- # local i
00:05:43.902 17:15:03 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:43.902 17:15:03 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:43.902 17:15:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:43.902 17:15:03 -- common/autotest_common.sh@871 -- # break
00:05:43.902 17:15:03 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:43.902 17:15:03 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:43.902 17:15:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:43.902 1+0 records in
00:05:43.902 1+0 records out
00:05:43.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000136471 s, 30.0 MB/s
00:05:43.902 17:15:03 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:43.902 17:15:03 -- common/autotest_common.sh@884 -- # size=4096
00:05:43.902 17:15:03 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:43.902 17:15:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:43.902 17:15:03 -- common/autotest_common.sh@887 -- # return 0
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:43.902 17:15:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:44.161 /dev/nbd1
00:05:44.161 17:15:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:44.161 17:15:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:44.162 17:15:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:44.162 17:15:03 -- common/autotest_common.sh@867 -- # local i
00:05:44.162 17:15:03 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:44.162 17:15:03 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:44.162 17:15:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:44.162 17:15:03 -- common/autotest_common.sh@871 -- # break
00:05:44.162 17:15:03 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:44.162 17:15:03 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:44.162 17:15:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:44.162 1+0 records in
00:05:44.162 1+0 records out
00:05:44.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024866 s, 16.5 MB/s
00:05:44.162 17:15:03 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:44.162 17:15:03 -- common/autotest_common.sh@884 -- # size=4096
00:05:44.162 17:15:03 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:44.162 17:15:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:44.162 17:15:03 -- common/autotest_common.sh@887 -- # return 0
00:05:44.162 17:15:03 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:44.162 17:15:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:44.162 17:15:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:44.162 17:15:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:44.162 17:15:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:44.421 {
00:05:44.421 "nbd_device": "/dev/nbd0",
00:05:44.421 "bdev_name": "Malloc0"
00:05:44.421 },
00:05:44.421 {
00:05:44.421 "nbd_device": "/dev/nbd1",
00:05:44.421 "bdev_name": "Malloc1"
00:05:44.421 }
00:05:44.421 ]'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@64 -- # echo '[
00:05:44.421 {
00:05:44.421 "nbd_device": "/dev/nbd0",
00:05:44.421 "bdev_name": "Malloc0"
00:05:44.421 },
00:05:44.421 {
00:05:44.421 "nbd_device": "/dev/nbd1",
00:05:44.421 "bdev_name": "Malloc1"
00:05:44.421 }
00:05:44.421 ]'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:44.421 /dev/nbd1'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:44.421 /dev/nbd1'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@65 -- # count=2
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@66 -- # echo 2
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@95 -- # count=2
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:44.421 256+0 records in
00:05:44.421 256+0 records out
00:05:44.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110797 s, 94.6 MB/s
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:44.421 256+0 records in
00:05:44.421 256+0 records out
00:05:44.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019918 s, 52.6 MB/s
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:44.421 256+0 records in
00:05:44.421 256+0 records out
00:05:44.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211297 s, 49.6 MB/s
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@51 -- # local i
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:44.421 17:15:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@41 -- # break
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@45 -- # return 0
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:44.681 17:15:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@41 -- # break
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@45 -- # return 0
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:44.940 17:15:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@65 -- # echo ''
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@65 -- # true
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@65 -- # count=0
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@66 -- # echo 0
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@104 -- # count=0
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:45.199 17:15:04 -- bdev/nbd_common.sh@109 -- # return 0
00:05:45.199 17:15:04 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:45.458 17:15:05 -- event/event.sh@35 -- # sleep 3
00:05:45.458 [2024-11-09 17:15:05.201088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:45.717 [2024-11-09 17:15:05.272095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:45.717 [2024-11-09 17:15:05.272097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.717 [2024-11-09 17:15:05.313223] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:45.717 [2024-11-09 17:15:05.313264] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:48.253 17:15:08 -- event/event.sh@23 -- # for i in {0..2}
00:05:48.253 17:15:08 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:48.253 spdk_app_start Round 2
00:05:48.253 17:15:08 -- event/event.sh@25 -- # waitforlisten 345895 /var/tmp/spdk-nbd.sock
00:05:48.253 17:15:08 -- common/autotest_common.sh@829 -- # '[' -z 345895 ']'
00:05:48.253 17:15:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:48.253 17:15:08 -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:48.253 17:15:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:48.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:48.253 17:15:08 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:48.253 17:15:08 -- common/autotest_common.sh@10 -- # set +x
00:05:48.512 17:15:08 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:48.512 17:15:08 -- common/autotest_common.sh@862 -- # return 0
00:05:48.512 17:15:08 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:48.771 Malloc0
00:05:48.771 17:15:08 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:49.031 Malloc1
00:05:49.031 17:15:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@12 -- # local i
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:49.031 /dev/nbd0
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:49.031 17:15:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:49.031 17:15:08 -- common/autotest_common.sh@867 -- # local i
00:05:49.031 17:15:08 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:49.031 17:15:08 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:49.031 17:15:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:49.031 17:15:08 -- common/autotest_common.sh@871 -- # break
00:05:49.031 17:15:08 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:49.031 17:15:08 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:49.031 17:15:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:49.031 1+0 records in
00:05:49.031 1+0 records out
00:05:49.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278231 s, 14.7 MB/s
00:05:49.031 17:15:08 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:49.031 17:15:08 -- common/autotest_common.sh@884 -- # size=4096
00:05:49.031 17:15:08 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:49.031 17:15:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:49.031 17:15:08 -- common/autotest_common.sh@887 -- # return 0
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:49.031 17:15:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:49.290 /dev/nbd1
00:05:49.290 17:15:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:49.290 17:15:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:49.290 17:15:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:49.290 17:15:08 -- common/autotest_common.sh@867 -- # local i
00:05:49.290 17:15:08 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:49.290 17:15:08 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:49.290 17:15:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:49.290 17:15:08 -- common/autotest_common.sh@871 -- # break
00:05:49.290 17:15:08 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:49.290 17:15:08 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:49.290 17:15:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:49.290 1+0 records in
00:05:49.290 1+0 records out
00:05:49.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279375 s, 14.7 MB/s
00:05:49.290 17:15:09 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:49.290 17:15:09 -- common/autotest_common.sh@884 -- # size=4096
00:05:49.290 17:15:09 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:49.290 17:15:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:49.290 17:15:09 -- common/autotest_common.sh@887 -- # return 0
00:05:49.290 17:15:09 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:49.290 17:15:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:49.290 17:15:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:49.290 17:15:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.290 17:15:09 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:49.550 {
00:05:49.550 "nbd_device": "/dev/nbd0",
00:05:49.550 "bdev_name": "Malloc0"
00:05:49.550 },
00:05:49.550 {
00:05:49.550 "nbd_device": "/dev/nbd1",
00:05:49.550 "bdev_name": "Malloc1"
00:05:49.550 }
00:05:49.550 ]'
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@64 -- # echo '[
00:05:49.550 {
00:05:49.550 "nbd_device": "/dev/nbd0",
00:05:49.550 "bdev_name": "Malloc0"
00:05:49.550 },
00:05:49.550 {
00:05:49.550 "nbd_device": "/dev/nbd1",
00:05:49.550 "bdev_name": "Malloc1"
00:05:49.550 }
00:05:49.550 ]'
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:49.550 /dev/nbd1'
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:49.550 /dev/nbd1'
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@65 -- # count=2
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@66 -- # echo 2
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@95 -- # count=2
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:49.550 256+0 records in
00:05:49.550 256+0 records out
00:05:49.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115542 s, 90.8 MB/s
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:49.550 256+0 records in
00:05:49.550 256+0 records out
00:05:49.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194625 s, 53.9 MB/s
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:49.550 256+0 records in
00:05:49.550 256+0 records out
00:05:49.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214208 s, 49.0 MB/s
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:49.550 17:15:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@51 -- # local i
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@41 -- # break
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@45 -- # return 0
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:49.809 17:15:09 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@41 -- # break
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@45 -- # return 0
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:50.069 17:15:09 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@65 -- # echo ''
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@65 -- # true
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@65 -- # count=0
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@66 -- # echo 0
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@104 -- # count=0
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:50.328 17:15:09 -- bdev/nbd_common.sh@109 -- # return 0
00:05:50.328 17:15:09 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:50.587 17:15:10 -- event/event.sh@35 -- # sleep 3
00:05:50.587 [2024-11-09 17:15:10.345840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:50.847 [2024-11-09 17:15:10.412202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:50.847 [2024-11-09 17:15:10.412204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.847 [2024-11-09 17:15:10.453491] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:50.847 [2024-11-09 17:15:10.453536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:54.138 17:15:13 -- event/event.sh@38 -- # waitforlisten 345895 /var/tmp/spdk-nbd.sock
00:05:54.138 17:15:13 -- common/autotest_common.sh@829 -- # '[' -z 345895 ']'
00:05:54.138 17:15:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:54.138 17:15:13 -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:54.138 17:15:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:54.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:54.138 17:15:13 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:54.138 17:15:13 -- common/autotest_common.sh@10 -- # set +x
00:05:54.138 17:15:13 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:54.138 17:15:13 -- common/autotest_common.sh@862 -- # return 0
00:05:54.138 17:15:13 -- event/event.sh@39 -- # killprocess 345895
00:05:54.138 17:15:13 -- common/autotest_common.sh@936 -- # '[' -z 345895 ']'
00:05:54.138 17:15:13 -- common/autotest_common.sh@940 -- # kill -0 345895
00:05:54.138 17:15:13 -- common/autotest_common.sh@941 -- # uname
00:05:54.138 17:15:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:54.138 17:15:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 345895
00:05:54.138 17:15:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:54.138 17:15:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:54.138 17:15:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 345895'
00:05:54.138 killing process with pid 345895
00:05:54.138 17:15:13 -- common/autotest_common.sh@955 -- # kill 345895
00:05:54.138 17:15:13 -- common/autotest_common.sh@960 -- # wait 345895
00:05:54.138 spdk_app_start is called in Round 0.
00:05:54.138 Shutdown signal received, stop current app iteration
00:05:54.138 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:05:54.138 spdk_app_start is called in Round 1.
00:05:54.138 Shutdown signal received, stop current app iteration
00:05:54.138 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:05:54.138 spdk_app_start is called in Round 2.
00:05:54.138 Shutdown signal received, stop current app iteration
00:05:54.138 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:05:54.138 spdk_app_start is called in Round 3.
00:05:54.138 Shutdown signal received, stop current app iteration
00:05:54.138 17:15:13 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:54.138 17:15:13 -- event/event.sh@42 -- # return 0
00:05:54.138
00:05:54.138 real 0m16.497s
00:05:54.138 user 0m35.170s
00:05:54.138 sys 0m3.102s
00:05:54.138 17:15:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:54.138 17:15:13 -- common/autotest_common.sh@10 -- # set +x
00:05:54.138 ************************************
00:05:54.138 END TEST app_repeat
00:05:54.138 ************************************
00:05:54.138 17:15:13 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:54.138 17:15:13 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:54.138 17:15:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:54.138 17:15:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:54.138 17:15:13 -- common/autotest_common.sh@10 -- # set +x
00:05:54.138 ************************************
00:05:54.138 START TEST cpu_locks
00:05:54.138 ************************************
00:05:54.138 17:15:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:54.138 * Looking for test storage...
00:05:54.138 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event
00:05:54.138 17:15:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:05:54.138 17:15:13 -- common/autotest_common.sh@1690 -- # lcov --version
00:05:54.138 17:15:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:05:54.138 17:15:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:05:54.138 17:15:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:05:54.138 17:15:13 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:05:54.138 17:15:13 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:05:54.138 17:15:13 -- scripts/common.sh@335 -- # IFS=.-:
00:05:54.138 17:15:13 -- scripts/common.sh@335 -- # read -ra ver1
00:05:54.138 17:15:13 -- scripts/common.sh@336 -- # IFS=.-:
00:05:54.138 17:15:13 -- scripts/common.sh@336 -- # read -ra ver2
00:05:54.138 17:15:13 -- scripts/common.sh@337 -- # local 'op=<'
00:05:54.138 17:15:13 -- scripts/common.sh@339 -- # ver1_l=2
00:05:54.138 17:15:13 -- scripts/common.sh@340 -- # ver2_l=1
00:05:54.138 17:15:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:05:54.138 17:15:13 -- scripts/common.sh@343 -- # case "$op" in
00:05:54.138 17:15:13 -- scripts/common.sh@344 -- # : 1
00:05:54.138 17:15:13 -- scripts/common.sh@363 -- # (( v = 0 ))
00:05:54.138 17:15:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:54.138 17:15:13 -- scripts/common.sh@364 -- # decimal 1
00:05:54.138 17:15:13 -- scripts/common.sh@352 -- # local d=1
00:05:54.138 17:15:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:54.138 17:15:13 -- scripts/common.sh@354 -- # echo 1
00:05:54.138 17:15:13 -- scripts/common.sh@364 -- # ver1[v]=1
00:05:54.138 17:15:13 -- scripts/common.sh@365 -- # decimal 2
00:05:54.138 17:15:13 -- scripts/common.sh@352 -- # local d=2
00:05:54.138 17:15:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:54.138 17:15:13 -- scripts/common.sh@354 -- # echo 2
00:05:54.138 17:15:13 -- scripts/common.sh@365 -- # ver2[v]=2
00:05:54.139 17:15:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:05:54.139 17:15:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:05:54.139 17:15:13 -- scripts/common.sh@367 -- # return 0
00:05:54.139 17:15:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:54.139 17:15:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:05:54.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:54.139 --rc genhtml_branch_coverage=1
00:05:54.139 --rc genhtml_function_coverage=1
00:05:54.139 --rc genhtml_legend=1
00:05:54.139 --rc geninfo_all_blocks=1
00:05:54.139 --rc geninfo_unexecuted_blocks=1
00:05:54.139 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:54.139 '
00:05:54.139 17:15:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:05:54.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:54.139 --rc genhtml_branch_coverage=1
00:05:54.139 --rc genhtml_function_coverage=1
00:05:54.139 --rc genhtml_legend=1
00:05:54.139 --rc geninfo_all_blocks=1
00:05:54.139 --rc geninfo_unexecuted_blocks=1
00:05:54.139 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:54.139 '
00:05:54.139 17:15:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:05:54.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:54.139 --rc genhtml_branch_coverage=1
00:05:54.139 --rc genhtml_function_coverage=1
00:05:54.139 --rc genhtml_legend=1
00:05:54.139 --rc geninfo_all_blocks=1
00:05:54.139 --rc geninfo_unexecuted_blocks=1
00:05:54.139 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:54.139 '
00:05:54.139 17:15:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:05:54.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:54.139 --rc genhtml_branch_coverage=1
00:05:54.139 --rc genhtml_function_coverage=1
00:05:54.139 --rc genhtml_legend=1
00:05:54.139 --rc geninfo_all_blocks=1
00:05:54.139 --rc geninfo_unexecuted_blocks=1
00:05:54.139 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:54.139 '
00:05:54.139 17:15:13 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:54.139 17:15:13 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:54.139 17:15:13 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:54.139 17:15:13 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:54.139 17:15:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:54.139 17:15:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:54.139 17:15:13 -- common/autotest_common.sh@10 -- # set +x
00:05:54.139 ************************************
00:05:54.139 START TEST default_locks
00:05:54.139 ************************************
00:05:54.139 17:15:13 -- common/autotest_common.sh@1114 -- # default_locks
00:05:54.139 17:15:13 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=349649
00:05:54.139 17:15:13 -- event/cpu_locks.sh@47 -- # waitforlisten 349649
00:05:54.139 17:15:13 -- common/autotest_common.sh@829 -- # '[' -z 349649 ']'
00:05:54.139 17:15:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:54.139 17:15:13 -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:54.139 17:15:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:54.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:54.139 17:15:13 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:54.139 17:15:13 -- common/autotest_common.sh@10 -- # set +x
00:05:54.139 17:15:13 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:54.139 [2024-11-09 17:15:13.823176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:54.139 [2024-11-09 17:15:13.823244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349649 ]
00:05:54.139 EAL: No free 2048 kB hugepages reported on node 1
00:05:54.139 [2024-11-09 17:15:13.890085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:54.139 [2024-11-09 17:15:13.966836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:54.139 [2024-11-09 17:15:13.966941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.967 17:15:14 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:54.967 17:15:14 -- common/autotest_common.sh@862 -- # return 0
00:05:54.967 17:15:14 -- event/cpu_locks.sh@49 -- # locks_exist 349649
00:05:54.967 17:15:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:54.967 17:15:14 -- event/cpu_locks.sh@22 -- # lslocks -p 349649
00:05:55.536 lslocks: write error
00:05:55.536 17:15:15 -- event/cpu_locks.sh@50 -- # killprocess 349649
00:05:55.536 17:15:15 -- common/autotest_common.sh@936 -- # '[' -z 349649 ']'
00:05:55.536 17:15:15 -- common/autotest_common.sh@940 -- # kill -0 349649
00:05:55.536 17:15:15 -- common/autotest_common.sh@941 -- # uname
00:05:55.536 17:15:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:55.536 17:15:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 349649
00:05:55.536 17:15:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:55.536 17:15:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:55.536 17:15:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 349649'
00:05:55.536 killing process with pid 349649
00:05:55.536 17:15:15 -- common/autotest_common.sh@955 -- # kill 349649
00:05:55.536 17:15:15 -- common/autotest_common.sh@960 -- # wait 349649
00:05:56.106 17:15:15 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 349649
00:05:56.106 17:15:15 -- common/autotest_common.sh@650 -- # local es=0
00:05:56.106 17:15:15 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 349649
00:05:56.106 17:15:15 -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:56.106 17:15:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:56.106 17:15:15 -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:56.106 17:15:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:56.106 17:15:15 -- common/autotest_common.sh@653 -- # waitforlisten 349649
00:05:56.106 17:15:15 -- common/autotest_common.sh@829 -- # '[' -z 349649 ']'
00:05:56.106 17:15:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:56.106 17:15:15 -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:56.106 17:15:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:56.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:56.106 17:15:15 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:56.106 17:15:15 -- common/autotest_common.sh@10 -- # set +x
00:05:56.106 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (349649) - No such process
00:05:56.106 ERROR: process (pid: 349649) is no longer running
00:05:56.106 17:15:15 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:56.106 17:15:15 -- common/autotest_common.sh@862 -- # return 1
00:05:56.106 17:15:15 -- common/autotest_common.sh@653 -- # es=1
00:05:56.106 17:15:15 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:56.106 17:15:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:56.106 17:15:15 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:56.106 17:15:15 -- event/cpu_locks.sh@54 -- # no_locks
00:05:56.106 17:15:15 -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:56.106 17:15:15 -- event/cpu_locks.sh@26 -- # local lock_files
00:05:56.106 17:15:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:56.106
00:05:56.106 real 0m1.790s
00:05:56.106 user 0m1.907s
00:05:56.106 sys 0m0.607s
00:05:56.106 17:15:15 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:56.106 17:15:15 -- common/autotest_common.sh@10 -- # set +x
00:05:56.106 ************************************
00:05:56.106 END TEST default_locks
00:05:56.106 ************************************
00:05:56.106 17:15:15 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:56.106 17:15:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:56.106 17:15:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:56.106 17:15:15 -- common/autotest_common.sh@10 -- # set +x
00:05:56.106 ************************************
00:05:56.106 START TEST default_locks_via_rpc
00:05:56.106 ************************************
00:05:56.106 17:15:15 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc
00:05:56.106 17:15:15 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=349965
00:05:56.106 17:15:15 -- event/cpu_locks.sh@63 -- # waitforlisten 349965
00:05:56.106 17:15:15 -- common/autotest_common.sh@829 -- # '[' -z 349965 ']'
00:05:56.106 17:15:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:56.106 17:15:15 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:56.106 17:15:15 -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:56.106 17:15:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:56.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:56.106 17:15:15 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:56.106 17:15:15 -- common/autotest_common.sh@10 -- # set +x
00:05:56.106 [2024-11-09 17:15:15.647795] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:56.106 [2024-11-09 17:15:15.647859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349965 ]
00:05:56.106 EAL: No free 2048 kB hugepages reported on node 1
00:05:56.106 [2024-11-09 17:15:15.712112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.106 [2024-11-09 17:15:15.788629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:56.106 [2024-11-09 17:15:15.788731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.045 17:15:16 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:57.045 17:15:16 -- common/autotest_common.sh@862 -- # return 0
00:05:57.045 17:15:16 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:57.045 17:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:57.045 17:15:16 -- common/autotest_common.sh@10 -- # set +x
00:05:57.045 17:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:57.045 17:15:16 -- event/cpu_locks.sh@67 -- # no_locks
00:05:57.045 17:15:16 -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:57.045 17:15:16 -- event/cpu_locks.sh@26 -- # local lock_files
00:05:57.045 17:15:16 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:57.045 17:15:16 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:57.045 17:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:57.045 17:15:16 -- common/autotest_common.sh@10 -- # set +x
00:05:57.045 17:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:57.045 17:15:16 -- event/cpu_locks.sh@71 -- # locks_exist 349965
00:05:57.045 17:15:16 -- event/cpu_locks.sh@22 -- # lslocks -p 349965
00:05:57.045 17:15:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:57.304 17:15:16 -- event/cpu_locks.sh@73 -- # killprocess 349965
00:05:57.304 17:15:16 -- common/autotest_common.sh@936 -- # '[' -z 349965 ']'
00:05:57.304 17:15:16 -- common/autotest_common.sh@940 -- # kill -0 349965
00:05:57.304 17:15:16 -- common/autotest_common.sh@941 -- # uname
00:05:57.304 17:15:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:57.304 17:15:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 349965
00:05:57.304 17:15:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:57.304 17:15:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:57.304 17:15:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 349965'
00:05:57.304 killing process with pid 349965
00:05:57.304 17:15:16 -- common/autotest_common.sh@955 -- # kill 349965
00:05:57.304 17:15:16 -- common/autotest_common.sh@960 -- # wait 349965
00:05:57.564
00:05:57.564 real 0m1.554s
00:05:57.564 user 0m1.648s
00:05:57.564 sys 0m0.520s
00:05:57.564 17:15:17 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:57.564 17:15:17 -- common/autotest_common.sh@10 -- # set +x
00:05:57.564 ************************************
00:05:57.564 END TEST default_locks_via_rpc
00:05:57.564 ************************************
00:05:57.564 17:15:17 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:57.564 17:15:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:57.564 17:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:57.564 17:15:17 -- common/autotest_common.sh@10 -- # set +x
00:05:57.564 ************************************
00:05:57.564 START TEST non_locking_app_on_locked_coremask
00:05:57.564 ************************************
00:05:57.564 17:15:17 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask
00:05:57.564 17:15:17 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=350274
00:05:57.564 17:15:17 -- event/cpu_locks.sh@81 -- # waitforlisten 350274 /var/tmp/spdk.sock
00:05:57.564 17:15:17 -- common/autotest_common.sh@829 -- # '[' -z 350274 ']'
00:05:57.564 17:15:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:57.564 17:15:17 -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:57.564 17:15:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:57.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:57.564 17:15:17 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:57.564 17:15:17 -- common/autotest_common.sh@10 -- # set +x
00:05:57.564 17:15:17 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:57.564 [2024-11-09 17:15:17.254694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:57.564 [2024-11-09 17:15:17.254770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350274 ]
00:05:57.564 EAL: No free 2048 kB hugepages reported on node 1
00:05:57.564 [2024-11-09 17:15:17.321461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:57.824 [2024-11-09 17:15:17.397402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:57.824 [2024-11-09 17:15:17.397509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:58.393 17:15:18 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:58.393 17:15:18 -- common/autotest_common.sh@862 -- # return 0
00:05:58.393 17:15:18 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=350522
00:05:58.393 17:15:18 -- event/cpu_locks.sh@85 -- # waitforlisten 350522 /var/tmp/spdk2.sock
00:05:58.393 17:15:18 -- common/autotest_common.sh@829 -- # '[' -z 350522 ']'
00:05:58.393 17:15:18 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:58.393 17:15:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:58.393 17:15:18 -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:58.393 17:15:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:58.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:58.393 17:15:18 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:58.393 17:15:18 -- common/autotest_common.sh@10 -- # set +x
00:05:58.393 [2024-11-09 17:15:18.092130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:58.393 [2024-11-09 17:15:18.092178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350522 ]
00:05:58.393 [2024-11-09 17:15:18.092130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:58.393 [2024-11-09 17:15:18.092178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350522 ]
00:05:58.653 EAL: No free 2048 kB hugepages reported on node 1
00:05:58.653 [2024-11-09 17:15:18.177114] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:58.653 [2024-11-09 17:15:18.177140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:58.653 [2024-11-09 17:15:18.328687] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:58.653 [2024-11-09 17:15:18.328793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.222 17:15:18 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:59.222 17:15:18 -- common/autotest_common.sh@862 -- # return 0
00:05:59.222 17:15:18 -- event/cpu_locks.sh@87 -- # locks_exist 350274
00:05:59.222 17:15:18 -- event/cpu_locks.sh@22 -- # lslocks -p 350274
00:05:59.222 17:15:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:00.600 lslocks: write error
00:06:00.600 17:15:20 -- event/cpu_locks.sh@89 -- # killprocess 350274
00:06:00.600 17:15:20 -- common/autotest_common.sh@936 -- # '[' -z 350274 ']'
00:06:00.600 17:15:20 -- common/autotest_common.sh@940 -- # kill -0 350274
00:06:00.600 17:15:20 -- common/autotest_common.sh@941 -- # uname
00:06:00.600 17:15:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:00.600 17:15:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 350274
00:06:00.600 17:15:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:00.600 17:15:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:00.600 17:15:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 350274'
00:06:00.600 killing process with pid 350274
00:06:00.600 17:15:20 -- common/autotest_common.sh@955 -- # kill 350274
00:06:00.600 17:15:20 -- common/autotest_common.sh@960 -- # wait 350274
00:06:01.168 17:15:20 -- event/cpu_locks.sh@90 -- # killprocess 350522
00:06:01.168 17:15:20 -- common/autotest_common.sh@936 -- # '[' -z 350522 ']'
00:06:01.168 17:15:20 -- common/autotest_common.sh@940 -- # kill -0 350522
00:06:01.168 17:15:20 -- common/autotest_common.sh@941 -- # uname
00:06:01.168 17:15:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:01.168 17:15:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 350522
00:06:01.168 17:15:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:01.168 17:15:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:01.168 17:15:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 350522'
00:06:01.168 killing process with pid 350522
00:06:01.168 17:15:20 -- common/autotest_common.sh@955 -- # kill 350522
00:06:01.168 17:15:20 -- common/autotest_common.sh@960 -- # wait 350522
00:06:01.829
00:06:01.829 real 0m3.989s
00:06:01.829 user 0m4.248s
00:06:01.829 sys 0m1.297s
00:06:01.829 17:15:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:01.829 17:15:21 -- common/autotest_common.sh@10 -- # set +x
00:06:01.829 ************************************
00:06:01.829 END TEST non_locking_app_on_locked_coremask
00:06:01.829 ************************************
00:06:01.829 17:15:21 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:01.829 17:15:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:01.829 17:15:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:01.829 17:15:21 -- common/autotest_common.sh@10 -- # set +x
00:06:01.829 ************************************
00:06:01.829 START TEST locking_app_on_unlocked_coremask
00:06:01.829 ************************************
00:06:01.829 17:15:21 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask
00:06:01.829 17:15:21 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=351096
00:06:01.829 17:15:21 -- event/cpu_locks.sh@99 -- # waitforlisten 351096 /var/tmp/spdk.sock
00:06:01.829 17:15:21 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:01.829 17:15:21 -- common/autotest_common.sh@829 -- # '[' -z 351096 ']'
00:06:01.829 17:15:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:01.829 17:15:21 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:01.829 17:15:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:01.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:01.829 17:15:21 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:01.829 17:15:21 -- common/autotest_common.sh@10 -- # set +x
00:06:01.829 [2024-11-09 17:15:21.296589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:01.829 [2024-11-09 17:15:21.296678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351096 ]
00:06:01.829 EAL: No free 2048 kB hugepages reported on node 1
00:06:01.829 [2024-11-09 17:15:21.365278] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:01.829 [2024-11-09 17:15:21.365312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.829 [2024-11-09 17:15:21.428994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:01.829 [2024-11-09 17:15:21.429101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.398 17:15:22 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:02.398 17:15:22 -- common/autotest_common.sh@862 -- # return 0
00:06:02.398 17:15:22 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=351280
00:06:02.398 17:15:22 -- event/cpu_locks.sh@103 -- # waitforlisten 351280 /var/tmp/spdk2.sock
00:06:02.398 17:15:22 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:02.398 17:15:22 -- common/autotest_common.sh@829 -- # '[' -z 351280 ']'
00:06:02.398 17:15:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:02.398 17:15:22 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:02.398 17:15:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:02.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:02.398 17:15:22 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:02.398 17:15:22 -- common/autotest_common.sh@10 -- # set +x
00:06:02.398 [2024-11-09 17:15:22.140819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:02.398 [2024-11-09 17:15:22.140874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351280 ]
00:06:02.657 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.657 [2024-11-09 17:15:22.225106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.657 [2024-11-09 17:15:22.367271] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:02.657 [2024-11-09 17:15:22.367376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.226 17:15:22 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:03.226 17:15:22 -- common/autotest_common.sh@862 -- # return 0
00:06:03.226 17:15:22 -- event/cpu_locks.sh@105 -- # locks_exist 351280
00:06:03.226 17:15:22 -- event/cpu_locks.sh@22 -- # lslocks -p 351280
00:06:03.226 17:15:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:04.163 lslocks: write error
00:06:04.163 17:15:23 -- event/cpu_locks.sh@107 -- # killprocess 351096
00:06:04.163 17:15:23 -- common/autotest_common.sh@936 -- # '[' -z 351096 ']'
00:06:04.163 17:15:23 -- common/autotest_common.sh@940 -- # kill -0 351096
00:06:04.163 17:15:23 -- common/autotest_common.sh@941 -- # uname
00:06:04.163 17:15:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:04.163 17:15:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 351096
00:06:04.163 17:15:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:04.163 17:15:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:04.163 17:15:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 351096'
00:06:04.163 killing process with pid 351096
00:06:04.163 17:15:23 -- common/autotest_common.sh@955 -- # kill 351096
00:06:04.163 17:15:23 -- common/autotest_common.sh@960 -- # wait 351096
00:06:04.732 17:15:24 -- event/cpu_locks.sh@108 -- # killprocess 351280
00:06:04.732 17:15:24 -- common/autotest_common.sh@936 -- # '[' -z 351280 ']'
00:06:04.732 17:15:24 -- common/autotest_common.sh@940 -- # kill -0 351280
00:06:04.732 17:15:24 -- common/autotest_common.sh@941 -- # uname
00:06:04.732 17:15:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:04.732 17:15:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 351280
00:06:04.732 17:15:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:04.732 17:15:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:04.732 17:15:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 351280'
00:06:04.732 killing process with pid 351280
00:06:04.732 17:15:24 -- common/autotest_common.sh@955 -- # kill 351280
00:06:04.732 17:15:24 -- common/autotest_common.sh@960 -- # wait 351280
00:06:04.991
00:06:04.991 real 0m3.373s
00:06:04.991 user 0m3.607s
00:06:04.991 sys 0m1.041s
00:06:04.991 17:15:24 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:04.991 17:15:24 -- common/autotest_common.sh@10 -- # set +x
00:06:04.991 ************************************
00:06:04.991 END TEST locking_app_on_unlocked_coremask
00:06:04.991 ************************************
00:06:04.991 17:15:24 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:04.991 17:15:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:04.991 17:15:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:04.991 17:15:24 -- common/autotest_common.sh@10 -- # set +x
00:06:04.991 ************************************
00:06:04.991 START TEST locking_app_on_locked_coremask
00:06:04.991 ************************************
00:06:04.991 17:15:24 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask
00:06:04.991 17:15:24 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=351682
00:06:04.991 17:15:24 -- event/cpu_locks.sh@116 -- # waitforlisten 351682 /var/tmp/spdk.sock
00:06:04.991 17:15:24 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:04.991 17:15:24 -- common/autotest_common.sh@829 -- # '[' -z 351682 ']'
00:06:04.991 17:15:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.991 17:15:24 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:04.991 17:15:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:04.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.991 17:15:24 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:04.991 17:15:24 -- common/autotest_common.sh@10 -- # set +x
00:06:04.991 [2024-11-09 17:15:24.720237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:04.991 [2024-11-09 17:15:24.720310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351682 ]
00:06:04.991 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.250 [2024-11-09 17:15:24.788169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.250 [2024-11-09 17:15:24.855146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:05.250 [2024-11-09 17:15:24.855260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.819 17:15:25 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:05.819 17:15:25 -- common/autotest_common.sh@862 -- # return 0
00:06:05.819 17:15:25 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=351944
00:06:05.819 17:15:25 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 351944 /var/tmp/spdk2.sock
00:06:05.819 17:15:25 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:05.819 17:15:25 -- common/autotest_common.sh@650 -- # local es=0
00:06:05.819 17:15:25 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 351944 /var/tmp/spdk2.sock
00:06:05.819 17:15:25 -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:05.819 17:15:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:05.819 17:15:25 -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:05.819 17:15:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:05.819 17:15:25 -- common/autotest_common.sh@653 -- # waitforlisten 351944 /var/tmp/spdk2.sock
00:06:05.819 17:15:25 -- common/autotest_common.sh@829 -- # '[' -z 351944 ']'
00:06:05.819 17:15:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:05.819 17:15:25 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:05.819 17:15:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:05.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:05.819 17:15:25 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:05.819 17:15:25 -- common/autotest_common.sh@10 -- # set +x
00:06:05.819 [2024-11-09 17:15:25.575994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:05.819 [2024-11-09 17:15:25.576070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351944 ]
00:06:06.078 EAL: No free 2048 kB hugepages reported on node 1
00:06:06.078 [2024-11-09 17:15:25.670070] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 351682 has claimed it.
00:06:06.078 [2024-11-09 17:15:25.670112] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:06.646 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (351944) - No such process
00:06:06.646 ERROR: process (pid: 351944) is no longer running
00:06:06.646 17:15:26 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:06.646 17:15:26 -- common/autotest_common.sh@862 -- # return 1
00:06:06.646 17:15:26 -- common/autotest_common.sh@653 -- # es=1
00:06:06.646 17:15:26 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:06.646 17:15:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:06.646 17:15:26 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:06.646 17:15:26 -- event/cpu_locks.sh@122 -- # locks_exist 351682
00:06:06.646 17:15:26 -- event/cpu_locks.sh@22 -- # lslocks -p 351682
00:06:06.646 17:15:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:07.214 lslocks: write error
00:06:07.214 17:15:26 -- event/cpu_locks.sh@124 -- # killprocess 351682
00:06:07.214 17:15:26 -- common/autotest_common.sh@936 -- # '[' -z 351682 ']'
00:06:07.214 17:15:26 -- common/autotest_common.sh@940 -- # kill -0 351682
00:06:07.214 17:15:26 -- common/autotest_common.sh@941 -- # uname
00:06:07.214 17:15:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:07.214 17:15:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 351682
00:06:07.214 17:15:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:07.214 17:15:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:07.214 17:15:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 351682'
00:06:07.214 killing process with pid 351682
00:06:07.214 17:15:26 -- common/autotest_common.sh@955 -- # kill 351682
00:06:07.214 17:15:26 -- common/autotest_common.sh@960 -- # wait 351682
00:06:07.474
00:06:07.474 real 0m2.512s
00:06:07.474 user 0m2.750s
00:06:07.474 sys 0m0.765s
00:06:07.474 17:15:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:07.474 17:15:27 -- common/autotest_common.sh@10 -- # set +x
00:06:07.474 ************************************
00:06:07.474 END TEST locking_app_on_locked_coremask
00:06:07.474 ************************************
00:06:07.733 17:15:27 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:07.733 17:15:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:07.733 17:15:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:07.733 17:15:27 -- common/autotest_common.sh@10 -- # set +x
00:06:07.733 ************************************
00:06:07.733 START TEST locking_overlapped_coremask
00:06:07.733 ************************************
00:06:07.733 17:15:27 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask
00:06:07.733 17:15:27 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=352250
00:06:07.733 17:15:27 -- event/cpu_locks.sh@133 -- # waitforlisten 352250 /var/tmp/spdk.sock
00:06:07.733 17:15:27 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:07.733 17:15:27 -- common/autotest_common.sh@829 -- # '[' -z 352250 ']'
00:06:07.733 17:15:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:07.733 17:15:27 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:07.733 17:15:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:07.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:07.733 17:15:27 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:07.733 17:15:27 -- common/autotest_common.sh@10 -- # set +x
00:06:07.733 [2024-11-09 17:15:27.279847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:07.733 [2024-11-09 17:15:27.279919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352250 ]
00:06:07.733 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.733 [2024-11-09 17:15:27.348071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:07.733 [2024-11-09 17:15:27.413123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:07.733 [2024-11-09 17:15:27.413269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:07.733 [2024-11-09 17:15:27.413365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.733 [2024-11-09 17:15:27.413366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:08.671 17:15:28 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:08.671 17:15:28 -- common/autotest_common.sh@862 -- # return 0
00:06:08.671 17:15:28 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=352377
00:06:08.671 17:15:28 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 352377 /var/tmp/spdk2.sock
00:06:08.671 17:15:28 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:08.671 17:15:28 -- common/autotest_common.sh@650 -- # local es=0
00:06:08.671 17:15:28 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 352377 /var/tmp/spdk2.sock
00:06:08.671 17:15:28 -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:08.671 17:15:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:08.671 17:15:28 -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:08.671 17:15:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:08.671 17:15:28 --
common/autotest_common.sh@653 -- # waitforlisten 352377 /var/tmp/spdk2.sock 00:06:08.671 17:15:28 -- common/autotest_common.sh@829 -- # '[' -z 352377 ']' 00:06:08.671 17:15:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.671 17:15:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.671 17:15:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.671 17:15:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.671 17:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 [2024-11-09 17:15:28.138231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.671 [2024-11-09 17:15:28.138320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352377 ] 00:06:08.671 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.671 [2024-11-09 17:15:28.231789] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 352250 has claimed it. 00:06:08.671 [2024-11-09 17:15:28.231832] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.241 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (352377) - No such process 00:06:09.241 ERROR: process (pid: 352377) is no longer running 00:06:09.241 17:15:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.241 17:15:28 -- common/autotest_common.sh@862 -- # return 1 00:06:09.241 17:15:28 -- common/autotest_common.sh@653 -- # es=1 00:06:09.241 17:15:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.241 17:15:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.241 17:15:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.241 17:15:28 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.241 17:15:28 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.241 17:15:28 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.241 17:15:28 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.241 17:15:28 -- event/cpu_locks.sh@141 -- # killprocess 352250 00:06:09.241 17:15:28 -- common/autotest_common.sh@936 -- # '[' -z 352250 ']' 00:06:09.241 17:15:28 -- common/autotest_common.sh@940 -- # kill -0 352250 00:06:09.241 17:15:28 -- common/autotest_common.sh@941 -- # uname 00:06:09.241 17:15:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.241 17:15:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 352250 00:06:09.241 17:15:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.241 17:15:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.241 17:15:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 352250' 00:06:09.241 killing process with pid 352250 00:06:09.241 17:15:28 -- common/autotest_common.sh@955 -- # kill 352250 00:06:09.241 17:15:28 -- 
common/autotest_common.sh@960 -- # wait 352250 00:06:09.500 00:06:09.500 real 0m1.903s 00:06:09.500 user 0m5.430s 00:06:09.500 sys 0m0.440s 00:06:09.500 17:15:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.500 17:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:09.500 ************************************ 00:06:09.500 END TEST locking_overlapped_coremask 00:06:09.500 ************************************ 00:06:09.500 17:15:29 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:09.500 17:15:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.500 17:15:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.500 17:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:09.500 ************************************ 00:06:09.500 START TEST locking_overlapped_coremask_via_rpc 00:06:09.500 ************************************ 00:06:09.500 17:15:29 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:09.500 17:15:29 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=352560 00:06:09.500 17:15:29 -- event/cpu_locks.sh@149 -- # waitforlisten 352560 /var/tmp/spdk.sock 00:06:09.500 17:15:29 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:09.500 17:15:29 -- common/autotest_common.sh@829 -- # '[' -z 352560 ']' 00:06:09.500 17:15:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.500 17:15:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.500 17:15:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.501 17:15:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.501 17:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:09.501 [2024-11-09 17:15:29.234043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.501 [2024-11-09 17:15:29.234114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352560 ] 00:06:09.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.760 [2024-11-09 17:15:29.301361] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.760 [2024-11-09 17:15:29.301387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.760 [2024-11-09 17:15:29.377382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.760 [2024-11-09 17:15:29.377513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.760 [2024-11-09 17:15:29.377611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.760 [2024-11-09 17:15:29.377614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.327 17:15:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.327 17:15:30 -- common/autotest_common.sh@862 -- # return 0 00:06:10.327 17:15:30 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=352827 00:06:10.327 17:15:30 -- event/cpu_locks.sh@153 -- # waitforlisten 352827 /var/tmp/spdk2.sock 00:06:10.328 17:15:30 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:10.328 17:15:30 -- common/autotest_common.sh@829 -- # '[' -z 352827 ']' 00:06:10.328 17:15:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.328 17:15:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.328 17:15:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.328 17:15:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.328 17:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:10.587 [2024-11-09 17:15:30.099311] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.587 [2024-11-09 17:15:30.099404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352827 ] 00:06:10.587 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.587 [2024-11-09 17:15:30.191558] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.587 [2024-11-09 17:15:30.191589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.587 [2024-11-09 17:15:30.345046] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.587 [2024-11-09 17:15:30.345186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.587 [2024-11-09 17:15:30.345300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.587 [2024-11-09 17:15:30.345302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.524 17:15:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.524 17:15:30 -- common/autotest_common.sh@862 -- # return 0 00:06:11.524 17:15:30 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.524 17:15:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.524 17:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:11.524 17:15:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.524 17:15:30 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.524 17:15:30 -- common/autotest_common.sh@650 -- # local es=0 00:06:11.524 17:15:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.524 17:15:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:11.524 17:15:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.524 17:15:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:11.524 17:15:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.524 17:15:30 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.524 17:15:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.524 17:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:11.524 [2024-11-09 17:15:30.969591] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 352560 has claimed it. 00:06:11.524 request: 00:06:11.524 { 00:06:11.524 "method": "framework_enable_cpumask_locks", 00:06:11.524 "req_id": 1 00:06:11.524 } 00:06:11.524 Got JSON-RPC error response 00:06:11.524 response: 00:06:11.524 { 00:06:11.524 "code": -32603, 00:06:11.524 "message": "Failed to claim CPU core: 2" 00:06:11.524 } 00:06:11.524 17:15:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:11.524 17:15:30 -- common/autotest_common.sh@653 -- # es=1 00:06:11.524 17:15:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.524 17:15:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.524 17:15:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.524 17:15:30 -- event/cpu_locks.sh@158 -- # waitforlisten 352560 /var/tmp/spdk.sock 00:06:11.524 17:15:30 -- common/autotest_common.sh@829 -- # '[' -z 352560 ']' 00:06:11.524 17:15:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.524 17:15:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.524 17:15:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.524 17:15:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.524 17:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:11.524 17:15:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.524 17:15:31 -- common/autotest_common.sh@862 -- # return 0 00:06:11.524 17:15:31 -- event/cpu_locks.sh@159 -- # waitforlisten 352827 /var/tmp/spdk2.sock 00:06:11.524 17:15:31 -- common/autotest_common.sh@829 -- # '[' -z 352827 ']' 00:06:11.524 17:15:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.524 17:15:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.524 17:15:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.524 17:15:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.524 17:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.784 17:15:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.784 17:15:31 -- common/autotest_common.sh@862 -- # return 0 00:06:11.784 17:15:31 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:11.784 17:15:31 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.784 17:15:31 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.784 17:15:31 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.784 00:06:11.784 real 0m2.160s 00:06:11.784 user 0m0.882s 00:06:11.784 sys 0m0.204s 00:06:11.784 17:15:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.784 17:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.784 ************************************ 00:06:11.784 END TEST locking_overlapped_coremask_via_rpc 00:06:11.784 ************************************ 00:06:11.784 17:15:31 -- event/cpu_locks.sh@174 -- # cleanup 00:06:11.784 17:15:31 -- event/cpu_locks.sh@15 -- # [[ -z 352560 ]] 00:06:11.784 17:15:31 -- event/cpu_locks.sh@15 -- # killprocess 352560 00:06:11.784 17:15:31 -- common/autotest_common.sh@936 -- # '[' -z 352560 ']' 00:06:11.784 17:15:31 -- common/autotest_common.sh@940 -- # kill -0 352560 00:06:11.784 17:15:31 -- common/autotest_common.sh@941 -- # uname 00:06:11.784 17:15:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.784 17:15:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 352560 00:06:11.784 17:15:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.784 17:15:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.784 17:15:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 352560' 00:06:11.784 killing process with pid 352560 00:06:11.784 17:15:31 -- common/autotest_common.sh@955 -- # kill 352560 00:06:11.784 17:15:31 -- common/autotest_common.sh@960 -- # wait 352560 00:06:12.043 17:15:31 -- event/cpu_locks.sh@16 -- # [[ -z 352827 ]] 00:06:12.043 17:15:31 -- event/cpu_locks.sh@16 -- # killprocess 352827 00:06:12.043 17:15:31 -- common/autotest_common.sh@936 -- # '[' -z 352827 ']' 00:06:12.043 17:15:31 -- common/autotest_common.sh@940 -- # kill -0 352827 00:06:12.043 17:15:31 -- common/autotest_common.sh@941 -- # uname 00:06:12.043 
17:15:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.043 17:15:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 352827 00:06:12.303 17:15:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:12.303 17:15:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:12.303 17:15:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 352827' 00:06:12.303 killing process with pid 352827 00:06:12.303 17:15:31 -- common/autotest_common.sh@955 -- # kill 352827 00:06:12.303 17:15:31 -- common/autotest_common.sh@960 -- # wait 352827 00:06:12.562 17:15:32 -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.562 17:15:32 -- event/cpu_locks.sh@1 -- # cleanup 00:06:12.562 17:15:32 -- event/cpu_locks.sh@15 -- # [[ -z 352560 ]] 00:06:12.562 17:15:32 -- event/cpu_locks.sh@15 -- # killprocess 352560 00:06:12.562 17:15:32 -- common/autotest_common.sh@936 -- # '[' -z 352560 ']' 00:06:12.562 17:15:32 -- common/autotest_common.sh@940 -- # kill -0 352560 00:06:12.563 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (352560) - No such process 00:06:12.563 17:15:32 -- common/autotest_common.sh@963 -- # echo 'Process with pid 352560 is not found' 00:06:12.563 Process with pid 352560 is not found 00:06:12.563 17:15:32 -- event/cpu_locks.sh@16 -- # [[ -z 352827 ]] 00:06:12.563 17:15:32 -- event/cpu_locks.sh@16 -- # killprocess 352827 00:06:12.563 17:15:32 -- common/autotest_common.sh@936 -- # '[' -z 352827 ']' 00:06:12.563 17:15:32 -- common/autotest_common.sh@940 -- # kill -0 352827 00:06:12.563 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (352827) - No such process 00:06:12.563 17:15:32 -- common/autotest_common.sh@963 -- # echo 'Process with pid 352827 is not found' 00:06:12.563 Process with pid 352827 is not found 00:06:12.563 17:15:32 -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.563 00:06:12.563 real 0m18.562s 00:06:12.563 user 0m31.436s 00:06:12.563 sys 0m5.855s 00:06:12.563 17:15:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.563 17:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:12.563 ************************************ 00:06:12.563 END TEST cpu_locks 00:06:12.563 ************************************ 00:06:12.563 00:06:12.563 real 0m44.385s 00:06:12.563 user 1m23.835s 00:06:12.563 sys 0m10.006s 00:06:12.563 17:15:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.563 17:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:12.563 ************************************ 00:06:12.563 END TEST event 00:06:12.563 ************************************ 00:06:12.563 17:15:32 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:12.563 17:15:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.563 17:15:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.563 17:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:12.563 ************************************ 00:06:12.563 START TEST thread 00:06:12.563 ************************************ 00:06:12.563 17:15:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:12.822 * Looking for test storage... 
00:06:12.822 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:12.822 17:15:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:12.822 17:15:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:12.822 17:15:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:12.822 17:15:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:12.822 17:15:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:12.822 17:15:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:12.822 17:15:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:12.822 17:15:32 -- scripts/common.sh@335 -- # IFS=.-: 00:06:12.822 17:15:32 -- scripts/common.sh@335 -- # read -ra ver1 00:06:12.822 17:15:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.822 17:15:32 -- scripts/common.sh@336 -- # read -ra ver2 00:06:12.822 17:15:32 -- scripts/common.sh@337 -- # local 'op=<' 00:06:12.822 17:15:32 -- scripts/common.sh@339 -- # ver1_l=2 00:06:12.822 17:15:32 -- scripts/common.sh@340 -- # ver2_l=1 00:06:12.822 17:15:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:12.822 17:15:32 -- scripts/common.sh@343 -- # case "$op" in 00:06:12.822 17:15:32 -- scripts/common.sh@344 -- # : 1 00:06:12.822 17:15:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:12.822 17:15:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.822 17:15:32 -- scripts/common.sh@364 -- # decimal 1 00:06:12.822 17:15:32 -- scripts/common.sh@352 -- # local d=1 00:06:12.822 17:15:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.822 17:15:32 -- scripts/common.sh@354 -- # echo 1 00:06:12.822 17:15:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:12.822 17:15:32 -- scripts/common.sh@365 -- # decimal 2 00:06:12.822 17:15:32 -- scripts/common.sh@352 -- # local d=2 00:06:12.822 17:15:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.822 17:15:32 -- scripts/common.sh@354 -- # echo 2 00:06:12.822 17:15:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:12.822 17:15:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:12.822 17:15:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:12.822 17:15:32 -- scripts/common.sh@367 -- # return 0 00:06:12.822 17:15:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.822 17:15:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:12.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.822 --rc genhtml_branch_coverage=1 00:06:12.822 --rc genhtml_function_coverage=1 00:06:12.822 --rc genhtml_legend=1 00:06:12.822 --rc geninfo_all_blocks=1 00:06:12.822 --rc geninfo_unexecuted_blocks=1 00:06:12.822 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:12.822 ' 00:06:12.822 17:15:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:12.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.822 --rc genhtml_branch_coverage=1 00:06:12.822 --rc genhtml_function_coverage=1 00:06:12.822 --rc genhtml_legend=1 00:06:12.822 --rc geninfo_all_blocks=1 00:06:12.822 --rc geninfo_unexecuted_blocks=1 00:06:12.822 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:12.822 ' 00:06:12.822 17:15:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:12.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.822 --rc genhtml_branch_coverage=1 
00:06:12.822 --rc genhtml_function_coverage=1 00:06:12.822 --rc genhtml_legend=1 00:06:12.823 --rc geninfo_all_blocks=1 00:06:12.823 --rc geninfo_unexecuted_blocks=1 00:06:12.823 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:12.823 ' 00:06:12.823 17:15:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:12.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.823 --rc genhtml_branch_coverage=1 00:06:12.823 --rc genhtml_function_coverage=1 00:06:12.823 --rc genhtml_legend=1 00:06:12.823 --rc geninfo_all_blocks=1 00:06:12.823 --rc geninfo_unexecuted_blocks=1 00:06:12.823 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:12.823 ' 00:06:12.823 17:15:32 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.823 17:15:32 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:12.823 17:15:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.823 17:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:12.823 ************************************ 00:06:12.823 START TEST thread_poller_perf 00:06:12.823 ************************************ 00:06:12.823 17:15:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.823 [2024-11-09 17:15:32.462276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.823 [2024-11-09 17:15:32.462366] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353208 ] 00:06:12.823 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.823 [2024-11-09 17:15:32.532439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.084 [2024-11-09 17:15:32.603627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.084 Running 1000 pollers for 1 seconds with 1 microseconds period. 
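For the two result tables that follow, poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; a sanity check worked from the first table's own numbers (no new data):

    # 2505501846 cyc / 795000 runs = 3151 cyc per poller invocation
    # 3151 cyc / 2.5 cyc-per-nsec (tsc_hz 2500000000) = 1260 nsec
    echo $((2505501846 / 795000))   # prints 3151

The second table (0-microsecond period) follows the same formula: 2501962914 / 13257000 = 188 cyc, i.e. 75 nsec.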
00:06:14.022 [2024-11-09T16:15:33.792Z] ====================================== 00:06:14.022 [2024-11-09T16:15:33.792Z] busy:2505501846 (cyc) 00:06:14.022 [2024-11-09T16:15:33.792Z] total_run_count: 795000 00:06:14.022 [2024-11-09T16:15:33.792Z] tsc_hz: 2500000000 (cyc) 00:06:14.022 [2024-11-09T16:15:33.792Z] ====================================== 00:06:14.022 [2024-11-09T16:15:33.792Z] poller_cost: 3151 (cyc), 1260 (nsec) 00:06:14.022 00:06:14.022 real 0m1.227s 00:06:14.022 user 0m1.142s 00:06:14.022 sys 0m0.081s 00:06:14.022 17:15:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.022 17:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:14.022 ************************************ 00:06:14.022 END TEST thread_poller_perf 00:06:14.022 ************************************ 00:06:14.022 17:15:33 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.022 17:15:33 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:14.022 17:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.022 17:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:14.022 ************************************ 00:06:14.022 START TEST thread_poller_perf 00:06:14.022 ************************************ 00:06:14.022 17:15:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.022 [2024-11-09 17:15:33.732211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.022 [2024-11-09 17:15:33.732334] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353496 ] 00:06:14.022 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.282 [2024-11-09 17:15:33.801669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.282 [2024-11-09 17:15:33.868843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.282 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:15.219 [2024-11-09T16:15:34.989Z] ====================================== 00:06:15.219 [2024-11-09T16:15:34.989Z] busy:2501962914 (cyc) 00:06:15.219 [2024-11-09T16:15:34.989Z] total_run_count: 13257000 00:06:15.219 [2024-11-09T16:15:34.989Z] tsc_hz: 2500000000 (cyc) 00:06:15.219 [2024-11-09T16:15:34.989Z] ====================================== 00:06:15.219 [2024-11-09T16:15:34.989Z] poller_cost: 188 (cyc), 75 (nsec) 00:06:15.219 00:06:15.219 real 0m1.222s 00:06:15.219 user 0m1.135s 00:06:15.219 sys 0m0.082s 00:06:15.219 17:15:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.219 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:06:15.219 ************************************ 00:06:15.219 END TEST thread_poller_perf 00:06:15.219 ************************************ 00:06:15.219 17:15:34 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:15.219 17:15:34 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:15.219 17:15:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.219 17:15:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.219 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:06:15.219 ************************************ 00:06:15.219 START TEST thread_spdk_lock 00:06:15.219 ************************************ 00:06:15.219 17:15:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:15.219 [2024-11-09 17:15:34.988106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.219 [2024-11-09 17:15:34.988168] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353779 ] 00:06:15.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.478 [2024-11-09 17:15:35.053392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.478 [2024-11-09 17:15:35.122102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.478 [2024-11-09 17:15:35.122104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.048 [2024-11-09 17:15:35.606944] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:16.048 [2024-11-09 17:15:35.606982] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:16.048 [2024-11-09 17:15:35.606993] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x1483c80 00:06:16.048 [2024-11-09 17:15:35.607896] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:16.048 [2024-11-09 17:15:35.608001] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:16.048 [2024-11-09 17:15:35.608021] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:16.048 Starting test contend 00:06:16.048 Worker Delay Wait us Hold us Total us 00:06:16.048 0 3 169461 184714 354176 00:06:16.048 1 5 88925 285026 373952 00:06:16.048 PASS test contend 00:06:16.048 Starting test hold_by_poller 00:06:16.048 PASS test hold_by_poller 00:06:16.048 Starting test hold_by_message 00:06:16.048 PASS test hold_by_message 00:06:16.048 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:16.048 100014 assertions passed 00:06:16.048 0 assertions failed 00:06:16.048 00:06:16.048 real 0m0.689s 00:06:16.048 user 0m1.093s 00:06:16.048 sys 0m0.079s 00:06:16.048 17:15:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.048 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.048 ************************************ 00:06:16.048 END TEST thread_spdk_lock 00:06:16.048 ************************************ 00:06:16.048 00:06:16.048 real 0m3.443s 00:06:16.048 user 0m3.504s 00:06:16.048 sys 0m0.450s 00:06:16.048 17:15:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.048 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.048 ************************************ 00:06:16.048 END TEST thread 00:06:16.048 ************************************ 00:06:16.048 17:15:35 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:16.048 17:15:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.048 17:15:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.048 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.048 ************************************ 00:06:16.048 START TEST accel 00:06:16.048 ************************************ 00:06:16.048 17:15:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:16.308 * Looking for test storage... 00:06:16.308 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:16.308 17:15:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:16.308 17:15:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:16.308 17:15:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:16.308 17:15:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:16.308 17:15:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:16.308 17:15:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:16.308 17:15:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:16.308 17:15:35 -- scripts/common.sh@335 -- # IFS=.-: 00:06:16.308 17:15:35 -- scripts/common.sh@335 -- # read -ra ver1 00:06:16.308 17:15:35 -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.308 17:15:35 -- scripts/common.sh@336 -- # read -ra ver2 00:06:16.308 17:15:35 -- scripts/common.sh@337 -- # local 'op=<' 00:06:16.308 17:15:35 -- scripts/common.sh@339 -- # ver1_l=2 00:06:16.308 17:15:35 -- scripts/common.sh@340 -- # ver2_l=1 00:06:16.308 17:15:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:16.308 17:15:35 -- scripts/common.sh@343 -- # case "$op" in 00:06:16.308 17:15:35 -- scripts/common.sh@344 -- # : 1 00:06:16.308 17:15:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:16.308 17:15:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.308 17:15:35 -- scripts/common.sh@364 -- # decimal 1 00:06:16.308 17:15:35 -- scripts/common.sh@352 -- # local d=1 00:06:16.308 17:15:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.308 17:15:35 -- scripts/common.sh@354 -- # echo 1 00:06:16.308 17:15:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:16.308 17:15:35 -- scripts/common.sh@365 -- # decimal 2 00:06:16.308 17:15:35 -- scripts/common.sh@352 -- # local d=2 00:06:16.308 17:15:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.308 17:15:35 -- scripts/common.sh@354 -- # echo 2 00:06:16.308 17:15:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:16.308 17:15:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:16.308 17:15:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:16.308 17:15:35 -- scripts/common.sh@367 -- # return 0 00:06:16.308 17:15:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.308 17:15:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:16.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.308 --rc genhtml_branch_coverage=1 00:06:16.308 --rc genhtml_function_coverage=1 00:06:16.308 --rc genhtml_legend=1 00:06:16.308 --rc geninfo_all_blocks=1 00:06:16.308 --rc geninfo_unexecuted_blocks=1 00:06:16.308 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.308 ' 00:06:16.308 17:15:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:16.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.308 --rc genhtml_branch_coverage=1 00:06:16.308 --rc genhtml_function_coverage=1 00:06:16.308 --rc genhtml_legend=1 00:06:16.308 --rc geninfo_all_blocks=1 00:06:16.308 --rc geninfo_unexecuted_blocks=1 00:06:16.308 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.308 ' 00:06:16.308 17:15:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:16.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.308 --rc genhtml_branch_coverage=1 00:06:16.308 --rc genhtml_function_coverage=1 00:06:16.308 --rc genhtml_legend=1 00:06:16.308 --rc geninfo_all_blocks=1 00:06:16.308 --rc geninfo_unexecuted_blocks=1 00:06:16.308 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.308 ' 00:06:16.308 17:15:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:16.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.308 --rc genhtml_branch_coverage=1 00:06:16.308 --rc genhtml_function_coverage=1 00:06:16.308 --rc genhtml_legend=1 00:06:16.308 --rc geninfo_all_blocks=1 00:06:16.308 --rc geninfo_unexecuted_blocks=1 00:06:16.308 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.308 ' 00:06:16.308 17:15:35 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:16.308 17:15:35 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:16.308 17:15:35 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.308 17:15:35 -- accel/accel.sh@59 -- # spdk_tgt_pid=354013 00:06:16.308 17:15:35 -- accel/accel.sh@60 -- # waitforlisten 354013 00:06:16.308 17:15:35 -- common/autotest_common.sh@829 -- # '[' -z 354013 ']' 00:06:16.308 17:15:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.309 17:15:35 -- accel/accel.sh@58 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:16.309 17:15:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.309 17:15:35 -- accel/accel.sh@58 -- # build_accel_config 00:06:16.309 17:15:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.309 17:15:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.309 17:15:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.309 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.309 17:15:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.309 17:15:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.309 17:15:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.309 17:15:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.309 17:15:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.309 17:15:35 -- accel/accel.sh@42 -- # jq -r . 00:06:16.309 [2024-11-09 17:15:35.949643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.309 [2024-11-09 17:15:35.949711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354013 ] 00:06:16.309 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.309 [2024-11-09 17:15:36.015759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.568 [2024-11-09 17:15:36.087489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.568 [2024-11-09 17:15:36.087596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.137 17:15:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.137 17:15:36 -- common/autotest_common.sh@862 -- # return 0 00:06:17.137 17:15:36 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:17.137 17:15:36 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:17.137 17:15:36 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:17.137 17:15:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.137 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:06:17.137 17:15:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 
17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # IFS== 00:06:17.137 17:15:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:17.137 17:15:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:17.137 17:15:36 -- accel/accel.sh@67 -- # killprocess 354013 00:06:17.137 17:15:36 -- common/autotest_common.sh@936 -- # '[' -z 354013 ']' 00:06:17.137 17:15:36 -- common/autotest_common.sh@940 -- # kill -0 354013 00:06:17.137 17:15:36 -- common/autotest_common.sh@941 -- # uname 00:06:17.137 17:15:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:17.137 17:15:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 354013 00:06:17.137 17:15:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:17.137 17:15:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:17.137 17:15:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 354013' 00:06:17.137 killing process with pid 354013 00:06:17.137 17:15:36 -- common/autotest_common.sh@955 -- # kill 354013 00:06:17.137 17:15:36 -- common/autotest_common.sh@960 -- # wait 354013 00:06:17.706 17:15:37 -- accel/accel.sh@68 -- # trap - ERR 00:06:17.706 17:15:37 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:17.706 17:15:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:17.706 17:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.706 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.706 17:15:37 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:17.706 17:15:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:17.706 17:15:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.706 17:15:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.706 17:15:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.706 17:15:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.706 17:15:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.706 17:15:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.706 17:15:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.706 17:15:37 -- accel/accel.sh@42 -- # jq -r . 
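In the opcode dump above, every accel opcode read back as 'software', as expected with no hardware accel module configured. A standalone equivalent of that query, assuming the stock scripts/rpc.py client (the jq filter is copied from the trace above):

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py \
        accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # emits one opcode=module line per entry, all '=software' in this run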
00:06:17.706 17:15:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.706 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.706 17:15:37 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:17.706 17:15:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:17.706 17:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.706 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.706 ************************************ 00:06:17.706 START TEST accel_missing_filename 00:06:17.706 ************************************ 00:06:17.706 17:15:37 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:17.706 17:15:37 -- common/autotest_common.sh@650 -- # local es=0 00:06:17.706 17:15:37 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:17.706 17:15:37 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:17.706 17:15:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.706 17:15:37 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:17.706 17:15:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.706 17:15:37 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:17.706 17:15:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:17.706 17:15:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.706 17:15:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.706 17:15:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.706 17:15:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.706 17:15:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.706 17:15:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.706 17:15:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.706 17:15:37 -- accel/accel.sh@42 -- # jq -r . 00:06:17.706 [2024-11-09 17:15:37.287704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.706 [2024-11-09 17:15:37.287801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354227 ] 00:06:17.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.706 [2024-11-09 17:15:37.358013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.706 [2024-11-09 17:15:37.427622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.706 [2024-11-09 17:15:37.467257] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.966 [2024-11-09 17:15:37.527563] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:17.966 A filename is required. 
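The "A filename is required." abort above is exactly what this negative test wants: the compress workload has no default input, so accel_perf must exit non-zero before starting. A minimal manual reproduction using the binary path from the log (the es checks that follow merely assert the failing status):

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress
    # expected: aborts with "A filename is required."; supplying
    # '-l <input-file> -y' instead drives the compress_verify case below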
00:06:17.966 17:15:37 -- common/autotest_common.sh@653 -- # es=234 00:06:17.966 17:15:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.966 17:15:37 -- common/autotest_common.sh@662 -- # es=106 00:06:17.966 17:15:37 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:17.966 17:15:37 -- common/autotest_common.sh@670 -- # es=1 00:06:17.966 17:15:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.966 00:06:17.966 real 0m0.330s 00:06:17.966 user 0m0.247s 00:06:17.966 sys 0m0.121s 00:06:17.966 17:15:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.966 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.966 ************************************ 00:06:17.966 END TEST accel_missing_filename 00:06:17.966 ************************************ 00:06:17.966 17:15:37 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:17.966 17:15:37 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:17.966 17:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.966 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.966 ************************************ 00:06:17.966 START TEST accel_compress_verify 00:06:17.966 ************************************ 00:06:17.966 17:15:37 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:17.966 17:15:37 -- common/autotest_common.sh@650 -- # local es=0 00:06:17.966 17:15:37 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:17.966 17:15:37 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:17.966 17:15:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.966 17:15:37 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:17.966 17:15:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.966 17:15:37 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:17.966 17:15:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:17.966 17:15:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.966 17:15:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.966 17:15:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.966 17:15:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.966 17:15:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.966 17:15:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.966 17:15:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.966 17:15:37 -- accel/accel.sh@42 -- # jq -r . 00:06:17.966 [2024-11-09 17:15:37.668760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:17.966 [2024-11-09 17:15:37.668849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354430 ] 00:06:17.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.226 [2024-11-09 17:15:37.741069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.226 [2024-11-09 17:15:37.809139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.226 [2024-11-09 17:15:37.848715] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.226 [2024-11-09 17:15:37.908358] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:18.226 00:06:18.226 Compression does not support the verify option, aborting. 00:06:18.226 17:15:37 -- common/autotest_common.sh@653 -- # es=161 00:06:18.226 17:15:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.226 17:15:37 -- common/autotest_common.sh@662 -- # es=33 00:06:18.226 17:15:37 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:18.226 17:15:37 -- common/autotest_common.sh@670 -- # es=1 00:06:18.226 17:15:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.226 00:06:18.226 real 0m0.332s 00:06:18.226 user 0m0.230s 00:06:18.226 sys 0m0.142s 00:06:18.226 17:15:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.226 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:18.226 ************************************ 00:06:18.226 END TEST accel_compress_verify 00:06:18.226 ************************************ 00:06:18.486 17:15:38 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:18.486 17:15:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:18.486 17:15:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.486 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.486 ************************************ 00:06:18.486 START TEST accel_wrong_workload 00:06:18.486 ************************************ 00:06:18.486 17:15:38 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:18.486 17:15:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:18.486 17:15:38 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:18.486 17:15:38 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:18.486 17:15:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.486 17:15:38 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:18.486 17:15:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.486 17:15:38 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:18.486 17:15:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:18.486 17:15:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.486 17:15:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.486 17:15:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.486 17:15:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.486 17:15:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.486 17:15:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.486 17:15:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.486 17:15:38 -- accel/accel.sh@42 -- # jq -r . 
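The two negative tests above (accel_missing_filename, accel_compress_verify) exercise autotest_common.sh's NOT wrapper, whose exit-status bookkeeping is visible in the xtrace: a raw status above 128 has the signal offset stripped (234 -> 106, 161 -> 33) and any remaining failure collapses to 1. A loose bash reconstruction of that pattern follows; it is illustrative only, not the verbatim helper from test/common/autotest_common.sh.

# Sketch of the NOT pattern traced above; variable names mirror the xtrace, details simplified.
NOT() {
    local es=0
    "$@" || es=$?                          # run the wrapped command, keep its exit status
    (( es > 128 )) && es=$(( es - 128 ))   # strip the 128+N signal offset (e.g. 234 -> 106)
    case "$es" in
        0) ;;                              # wrapped command unexpectedly succeeded
        *) es=1 ;;                         # normalize every failure to 1
    esac
    (( es != 0 ))                          # NOT succeeds only when the command failed
}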
00:06:18.486 Unsupported workload type: foobar 00:06:18.486 [2024-11-09 17:15:38.039819] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:18.486 accel_perf options: 00:06:18.486 [-h help message] 00:06:18.486 [-q queue depth per core] 00:06:18.486 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.486 [-T number of threads per core 00:06:18.486 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.486 [-t time in seconds] 00:06:18.486 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.486 [ dif_verify, , dif_generate, dif_generate_copy 00:06:18.486 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.486 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.486 [-S for crc32c workload, use this seed value (default 0) 00:06:18.486 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.486 [-f for fill workload, use this BYTE value (default 255) 00:06:18.486 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.486 [-y verify result if this switch is on] 00:06:18.486 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.486 Can be used to spread operations across a wider range of memory. 00:06:18.486 17:15:38 -- common/autotest_common.sh@653 -- # es=1 00:06:18.486 17:15:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.486 17:15:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.486 17:15:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.486 00:06:18.486 real 0m0.020s 00:06:18.486 user 0m0.009s 00:06:18.486 sys 0m0.011s 00:06:18.486 17:15:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.486 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.486 ************************************ 00:06:18.486 END TEST accel_wrong_workload 00:06:18.486 ************************************ 00:06:18.486 Error: writing output failed: Broken pipe 00:06:18.486 17:15:38 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.486 17:15:38 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:18.486 17:15:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.486 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.486 ************************************ 00:06:18.486 START TEST accel_negative_buffers 00:06:18.486 ************************************ 00:06:18.486 17:15:38 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.486 17:15:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:18.486 17:15:38 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:18.486 17:15:38 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:18.486 17:15:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.486 17:15:38 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:18.486 17:15:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.486 17:15:38 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:18.486 17:15:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:18.486 17:15:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.486 17:15:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.486 17:15:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.486 17:15:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.486 17:15:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.486 17:15:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.486 17:15:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.486 17:15:38 -- accel/accel.sh@42 -- # jq -r . 00:06:18.486 -x option must be non-negative. 00:06:18.486 [2024-11-09 17:15:38.099178] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:18.486 accel_perf options: 00:06:18.486 [-h help message] 00:06:18.486 [-q queue depth per core] 00:06:18.486 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.486 [-T number of threads per core 00:06:18.486 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.486 [-t time in seconds] 00:06:18.486 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.486 [ dif_verify, , dif_generate, dif_generate_copy 00:06:18.486 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.486 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.486 [-S for crc32c workload, use this seed value (default 0) 00:06:18.486 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.486 [-f for fill workload, use this BYTE value (default 255) 00:06:18.486 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.486 [-y verify result if this switch is on] 00:06:18.486 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.486 Can be used to spread operations across a wider range of memory. 
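The usage dump above (printed once per rejected option, here for '-w foobar' and '-x -1') doubles as the full accel_perf flag reference. A valid invocation built strictly from those flags, with the binary path written relative to an SPDK build tree rather than the jenkins workspace:

# Illustrative xor run honoring the constraints in the usage text above:
# -x takes at least 2 source buffers, -y turns on result verification.
./build/examples/accel_perf -t 1 -w xor -x 2 -y -q 32 -o 4096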
00:06:18.486 17:15:38 -- common/autotest_common.sh@653 -- # es=1 00:06:18.486 17:15:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.486 17:15:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.486 17:15:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.486 00:06:18.486 real 0m0.020s 00:06:18.486 user 0m0.008s 00:06:18.486 sys 0m0.012s 00:06:18.486 17:15:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.486 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.486 ************************************ 00:06:18.486 END TEST accel_negative_buffers 00:06:18.486 ************************************ 00:06:18.486 Error: writing output failed: Broken pipe 00:06:18.486 17:15:38 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:18.486 17:15:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:18.486 17:15:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.486 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.486 ************************************ 00:06:18.487 START TEST accel_crc32c 00:06:18.487 ************************************ 00:06:18.487 17:15:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:18.487 17:15:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.487 17:15:38 -- accel/accel.sh@17 -- # local accel_module 00:06:18.487 17:15:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:18.487 17:15:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:18.487 17:15:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.487 17:15:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.487 17:15:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.487 17:15:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.487 17:15:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.487 17:15:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.487 17:15:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.487 17:15:38 -- accel/accel.sh@42 -- # jq -r . 00:06:18.487 [2024-11-09 17:15:38.170668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.487 [2024-11-09 17:15:38.170758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354496 ] 00:06:18.487 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.487 [2024-11-09 17:15:38.238677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.746 [2024-11-09 17:15:38.309020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.126 17:15:39 -- accel/accel.sh@18 -- # out=' 00:06:20.126 SPDK Configuration: 00:06:20.126 Core mask: 0x1 00:06:20.126 00:06:20.126 Accel Perf Configuration: 00:06:20.126 Workload Type: crc32c 00:06:20.126 CRC-32C seed: 32 00:06:20.126 Transfer size: 4096 bytes 00:06:20.126 Vector count 1 00:06:20.126 Module: software 00:06:20.126 Queue depth: 32 00:06:20.126 Allocate depth: 32 00:06:20.126 # threads/core: 1 00:06:20.126 Run time: 1 seconds 00:06:20.126 Verify: Yes 00:06:20.126 00:06:20.126 Running for 1 seconds... 
00:06:20.126 00:06:20.126 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.126 ------------------------------------------------------------------------------------ 00:06:20.126 0,0 845632/s 3303 MiB/s 0 0 00:06:20.126 ==================================================================================== 00:06:20.126 Total 845632/s 3303 MiB/s 0 0' 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:20.126 17:15:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:20.126 17:15:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.126 17:15:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.126 17:15:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.126 17:15:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.126 17:15:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.126 17:15:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.126 17:15:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.126 17:15:39 -- accel/accel.sh@42 -- # jq -r . 00:06:20.126 [2024-11-09 17:15:39.497908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.126 [2024-11-09 17:15:39.497990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354769 ] 00:06:20.126 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.126 [2024-11-09 17:15:39.565466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.126 [2024-11-09 17:15:39.631908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val=0x1 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val=crc32c 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val=32 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 
17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.126 17:15:39 -- accel/accel.sh@21 -- # val=software 00:06:20.126 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.126 17:15:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.126 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.127 17:15:39 -- accel/accel.sh@21 -- # val=32 00:06:20.127 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.127 17:15:39 -- accel/accel.sh@21 -- # val=32 00:06:20.127 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.127 17:15:39 -- accel/accel.sh@21 -- # val=1 00:06:20.127 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.127 17:15:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.127 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.127 17:15:39 -- accel/accel.sh@21 -- # val=Yes 00:06:20.127 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.127 17:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.127 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.127 17:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.127 17:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.127 17:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:21.065 17:15:40 -- accel/accel.sh@21 -- # val= 00:06:21.065 17:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:21.065 17:15:40 -- accel/accel.sh@21 -- # val= 00:06:21.065 17:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:21.065 17:15:40 -- accel/accel.sh@21 -- # val= 00:06:21.065 17:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:21.065 17:15:40 -- accel/accel.sh@21 -- # val= 00:06:21.065 17:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:21.065 17:15:40 -- accel/accel.sh@21 -- # val= 00:06:21.065 17:15:40 -- accel/accel.sh@22 -- # case "$var" in 
00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:21.065 17:15:40 -- accel/accel.sh@21 -- # val= 00:06:21.065 17:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:21.065 17:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:21.065 17:15:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.065 17:15:40 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:21.065 17:15:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.065 00:06:21.065 real 0m2.658s 00:06:21.065 user 0m2.401s 00:06:21.065 sys 0m0.267s 00:06:21.065 17:15:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.065 17:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.065 ************************************ 00:06:21.065 END TEST accel_crc32c 00:06:21.065 ************************************ 00:06:21.325 17:15:40 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:21.325 17:15:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:21.325 17:15:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.325 17:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.325 ************************************ 00:06:21.325 START TEST accel_crc32c_C2 00:06:21.325 ************************************ 00:06:21.325 17:15:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:21.325 17:15:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.325 17:15:40 -- accel/accel.sh@17 -- # local accel_module 00:06:21.325 17:15:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:21.325 17:15:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:21.325 17:15:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.325 17:15:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.325 17:15:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.325 17:15:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.325 17:15:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.325 17:15:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.325 17:15:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.325 17:15:40 -- accel/accel.sh@42 -- # jq -r . 00:06:21.325 [2024-11-09 17:15:40.879058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:21.325 [2024-11-09 17:15:40.879150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355051 ] 00:06:21.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.325 [2024-11-09 17:15:40.949588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.325 [2024-11-09 17:15:41.017533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.704 17:15:42 -- accel/accel.sh@18 -- # out=' 00:06:22.704 SPDK Configuration: 00:06:22.704 Core mask: 0x1 00:06:22.704 00:06:22.704 Accel Perf Configuration: 00:06:22.704 Workload Type: crc32c 00:06:22.704 CRC-32C seed: 0 00:06:22.704 Transfer size: 4096 bytes 00:06:22.704 Vector count 2 00:06:22.704 Module: software 00:06:22.704 Queue depth: 32 00:06:22.704 Allocate depth: 32 00:06:22.704 # threads/core: 1 00:06:22.704 Run time: 1 seconds 00:06:22.704 Verify: Yes 00:06:22.704 00:06:22.704 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.704 ------------------------------------------------------------------------------------ 00:06:22.704 0,0 611680/s 4778 MiB/s 0 0 00:06:22.704 ==================================================================================== 00:06:22.704 Total 611680/s 4778 MiB/s 0 0' 00:06:22.704 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.704 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.704 17:15:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:22.704 17:15:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:22.704 17:15:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.704 17:15:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.704 17:15:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.704 17:15:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.704 17:15:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.704 17:15:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.705 17:15:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.705 17:15:42 -- accel/accel.sh@42 -- # jq -r . 00:06:22.705 [2024-11-09 17:15:42.210055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:22.705 [2024-11-09 17:15:42.210147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355269 ] 00:06:22.705 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.705 [2024-11-09 17:15:42.279864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.705 [2024-11-09 17:15:42.346270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val= 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val= 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val=0x1 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val= 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val= 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val=crc32c 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val=0 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val= 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val=software 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val=32 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val=32 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- 
accel/accel.sh@21 -- # val=1 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val=Yes 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val= 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:22.705 17:15:42 -- accel/accel.sh@21 -- # val= 00:06:22.705 17:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:22.705 17:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:24.083 17:15:43 -- accel/accel.sh@21 -- # val= 00:06:24.083 17:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:24.083 17:15:43 -- accel/accel.sh@21 -- # val= 00:06:24.083 17:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:24.083 17:15:43 -- accel/accel.sh@21 -- # val= 00:06:24.083 17:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:24.083 17:15:43 -- accel/accel.sh@21 -- # val= 00:06:24.083 17:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:24.083 17:15:43 -- accel/accel.sh@21 -- # val= 00:06:24.083 17:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:24.083 17:15:43 -- accel/accel.sh@21 -- # val= 00:06:24.083 17:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:24.083 17:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:24.083 17:15:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.083 17:15:43 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:24.083 17:15:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.083 00:06:24.083 real 0m2.665s 00:06:24.083 user 0m2.425s 00:06:24.083 sys 0m0.249s 00:06:24.083 17:15:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.083 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.083 ************************************ 00:06:24.083 END TEST accel_crc32c_C2 00:06:24.083 ************************************ 00:06:24.083 17:15:43 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:24.083 17:15:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:24.083 17:15:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.083 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.083 ************************************ 00:06:24.083 START TEST accel_copy 
00:06:24.083 ************************************ 00:06:24.083 17:15:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:24.083 17:15:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.083 17:15:43 -- accel/accel.sh@17 -- # local accel_module 00:06:24.083 17:15:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:24.083 17:15:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:24.083 17:15:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.084 17:15:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.084 17:15:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.084 17:15:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.084 17:15:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.084 17:15:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.084 17:15:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.084 17:15:43 -- accel/accel.sh@42 -- # jq -r . 00:06:24.084 [2024-11-09 17:15:43.594446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.084 [2024-11-09 17:15:43.594545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355480 ] 00:06:24.084 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.084 [2024-11-09 17:15:43.665241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.084 [2024-11-09 17:15:43.733677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.463 17:15:44 -- accel/accel.sh@18 -- # out=' 00:06:25.463 SPDK Configuration: 00:06:25.463 Core mask: 0x1 00:06:25.463 00:06:25.463 Accel Perf Configuration: 00:06:25.463 Workload Type: copy 00:06:25.463 Transfer size: 4096 bytes 00:06:25.463 Vector count 1 00:06:25.463 Module: software 00:06:25.463 Queue depth: 32 00:06:25.463 Allocate depth: 32 00:06:25.463 # threads/core: 1 00:06:25.463 Run time: 1 seconds 00:06:25.463 Verify: Yes 00:06:25.463 00:06:25.463 Running for 1 seconds... 00:06:25.463 00:06:25.463 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.463 ------------------------------------------------------------------------------------ 00:06:25.463 0,0 558368/s 2181 MiB/s 0 0 00:06:25.463 ==================================================================================== 00:06:25.463 Total 558368/s 2181 MiB/s 0 0' 00:06:25.463 17:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:25.463 17:15:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:25.463 17:15:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.463 17:15:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.463 17:15:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.463 17:15:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.463 17:15:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.463 17:15:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.463 17:15:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.463 17:15:44 -- accel/accel.sh@42 -- # jq -r . 00:06:25.463 [2024-11-09 17:15:44.923632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:25.463 [2024-11-09 17:15:44.923725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355644 ] 00:06:25.463 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.463 [2024-11-09 17:15:44.991359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.463 [2024-11-09 17:15:45.058400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val=0x1 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val=copy 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val=software 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val=32 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val=32 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val=1 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val=Yes 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.463 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.463 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.463 17:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.464 17:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.464 17:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.464 17:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:26.843 17:15:46 -- accel/accel.sh@21 -- # val= 00:06:26.843 17:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:26.843 17:15:46 -- accel/accel.sh@21 -- # val= 00:06:26.843 17:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:26.843 17:15:46 -- accel/accel.sh@21 -- # val= 00:06:26.843 17:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:26.843 17:15:46 -- accel/accel.sh@21 -- # val= 00:06:26.843 17:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:26.843 17:15:46 -- accel/accel.sh@21 -- # val= 00:06:26.843 17:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:26.843 17:15:46 -- accel/accel.sh@21 -- # val= 00:06:26.843 17:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:26.843 17:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:26.843 17:15:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.843 17:15:46 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:26.843 17:15:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.843 00:06:26.843 real 0m2.662s 00:06:26.843 user 0m2.427s 00:06:26.843 sys 0m0.243s 00:06:26.843 17:15:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.843 17:15:46 -- common/autotest_common.sh@10 -- # set +x 00:06:26.843 ************************************ 00:06:26.843 END TEST accel_copy 00:06:26.843 ************************************ 00:06:26.843 17:15:46 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.843 17:15:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:26.843 17:15:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.843 17:15:46 -- common/autotest_common.sh@10 -- # set +x 00:06:26.843 ************************************ 00:06:26.843 START TEST accel_fill 00:06:26.843 ************************************ 00:06:26.843 17:15:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.843 17:15:46 -- accel/accel.sh@16 -- # local accel_opc 
00:06:26.843 17:15:46 -- accel/accel.sh@17 -- # local accel_module 00:06:26.843 17:15:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.843 17:15:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.843 17:15:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.844 17:15:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.844 17:15:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.844 17:15:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.844 17:15:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.844 17:15:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.844 17:15:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.844 17:15:46 -- accel/accel.sh@42 -- # jq -r . 00:06:26.844 [2024-11-09 17:15:46.304037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.844 [2024-11-09 17:15:46.304126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355915 ] 00:06:26.844 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.844 [2024-11-09 17:15:46.374112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.844 [2024-11-09 17:15:46.443012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.293 17:15:47 -- accel/accel.sh@18 -- # out=' 00:06:28.293 SPDK Configuration: 00:06:28.293 Core mask: 0x1 00:06:28.293 00:06:28.293 Accel Perf Configuration: 00:06:28.293 Workload Type: fill 00:06:28.293 Fill pattern: 0x80 00:06:28.293 Transfer size: 4096 bytes 00:06:28.293 Vector count 1 00:06:28.293 Module: software 00:06:28.293 Queue depth: 64 00:06:28.293 Allocate depth: 64 00:06:28.293 # threads/core: 1 00:06:28.293 Run time: 1 seconds 00:06:28.293 Verify: Yes 00:06:28.293 00:06:28.293 Running for 1 seconds... 00:06:28.293 00:06:28.293 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.293 ------------------------------------------------------------------------------------ 00:06:28.293 0,0 963008/s 3761 MiB/s 0 0 00:06:28.293 ==================================================================================== 00:06:28.293 Total 963008/s 3761 MiB/s 0 0' 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:28.293 17:15:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:28.293 17:15:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.293 17:15:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.293 17:15:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.293 17:15:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.293 17:15:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.293 17:15:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.293 17:15:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.293 17:15:47 -- accel/accel.sh@42 -- # jq -r . 00:06:28.293 [2024-11-09 17:15:47.635155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:28.293 [2024-11-09 17:15:47.635268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356190 ] 00:06:28.293 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.293 [2024-11-09 17:15:47.705561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.293 [2024-11-09 17:15:47.771827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val= 00:06:28.293 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val= 00:06:28.293 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val=0x1 00:06:28.293 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val= 00:06:28.293 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val= 00:06:28.293 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val=fill 00:06:28.293 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.293 17:15:47 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val=0x80 00:06:28.293 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.293 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val= 00:06:28.293 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.293 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.293 17:15:47 -- accel/accel.sh@21 -- # val=software 00:06:28.294 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.294 17:15:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.294 17:15:47 -- accel/accel.sh@21 -- # val=64 00:06:28.294 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.294 17:15:47 -- accel/accel.sh@21 -- # val=64 00:06:28.294 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.294 17:15:47 -- 
accel/accel.sh@21 -- # val=1 00:06:28.294 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.294 17:15:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.294 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.294 17:15:47 -- accel/accel.sh@21 -- # val=Yes 00:06:28.294 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.294 17:15:47 -- accel/accel.sh@21 -- # val= 00:06:28.294 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:28.294 17:15:47 -- accel/accel.sh@21 -- # val= 00:06:28.294 17:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:28.294 17:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:29.230 17:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.230 17:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.230 17:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.230 17:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.230 17:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.230 17:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.230 17:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.230 17:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.230 17:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.230 17:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.230 17:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.230 17:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.230 17:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.230 17:15:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.230 17:15:48 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:29.230 17:15:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.230 00:06:29.230 real 0m2.665s 00:06:29.230 user 0m2.426s 00:06:29.230 sys 0m0.246s 00:06:29.230 17:15:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.230 17:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:29.230 ************************************ 00:06:29.230 END TEST accel_fill 00:06:29.230 ************************************ 00:06:29.230 17:15:48 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:29.230 17:15:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:29.230 17:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.230 17:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:29.230 ************************************ 00:06:29.230 START TEST 
accel_copy_crc32c 00:06:29.230 ************************************ 00:06:29.230 17:15:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:29.230 17:15:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.230 17:15:48 -- accel/accel.sh@17 -- # local accel_module 00:06:29.230 17:15:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:29.490 17:15:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:29.490 17:15:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.490 17:15:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.490 17:15:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.490 17:15:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.490 17:15:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.490 17:15:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.490 17:15:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.490 17:15:49 -- accel/accel.sh@42 -- # jq -r . 00:06:29.490 [2024-11-09 17:15:49.017609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.490 [2024-11-09 17:15:49.017690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356474 ] 00:06:29.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.490 [2024-11-09 17:15:49.086101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.490 [2024-11-09 17:15:49.153473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.876 17:15:50 -- accel/accel.sh@18 -- # out=' 00:06:30.876 SPDK Configuration: 00:06:30.876 Core mask: 0x1 00:06:30.876 00:06:30.876 Accel Perf Configuration: 00:06:30.876 Workload Type: copy_crc32c 00:06:30.876 CRC-32C seed: 0 00:06:30.876 Vector size: 4096 bytes 00:06:30.876 Transfer size: 4096 bytes 00:06:30.876 Vector count 1 00:06:30.876 Module: software 00:06:30.876 Queue depth: 32 00:06:30.876 Allocate depth: 32 00:06:30.876 # threads/core: 1 00:06:30.876 Run time: 1 seconds 00:06:30.876 Verify: Yes 00:06:30.876 00:06:30.876 Running for 1 seconds... 00:06:30.876 00:06:30.876 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.876 ------------------------------------------------------------------------------------ 00:06:30.876 0,0 426752/s 1667 MiB/s 0 0 00:06:30.876 ==================================================================================== 00:06:30.876 Total 426752/s 1667 MiB/s 0 0' 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.876 17:15:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:30.876 17:15:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:30.876 17:15:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.876 17:15:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.876 17:15:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.876 17:15:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.876 17:15:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.876 17:15:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.876 17:15:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.876 17:15:50 -- accel/accel.sh@42 -- # jq -r . 
00:06:30.876 [2024-11-09 17:15:50.346126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.876 [2024-11-09 17:15:50.346216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356740 ] 00:06:30.876 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.876 [2024-11-09 17:15:50.416466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.876 [2024-11-09 17:15:50.485334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.876 17:15:50 -- accel/accel.sh@21 -- # val= 00:06:30.876 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.876 17:15:50 -- accel/accel.sh@21 -- # val= 00:06:30.876 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.876 17:15:50 -- accel/accel.sh@21 -- # val=0x1 00:06:30.876 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.876 17:15:50 -- accel/accel.sh@21 -- # val= 00:06:30.876 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.876 17:15:50 -- accel/accel.sh@21 -- # val= 00:06:30.876 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.876 17:15:50 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:30.876 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.876 17:15:50 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.876 17:15:50 -- accel/accel.sh@21 -- # val=0 00:06:30.876 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.876 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.876 17:15:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.876 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val= 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val=software 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val=32 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 
00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val=32 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val=1 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val=Yes 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val= 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:30.877 17:15:50 -- accel/accel.sh@21 -- # val= 00:06:30.877 17:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:30.877 17:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:32.256 17:15:51 -- accel/accel.sh@21 -- # val= 00:06:32.256 17:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:32.256 17:15:51 -- accel/accel.sh@21 -- # val= 00:06:32.256 17:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:32.256 17:15:51 -- accel/accel.sh@21 -- # val= 00:06:32.256 17:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:32.256 17:15:51 -- accel/accel.sh@21 -- # val= 00:06:32.256 17:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:32.256 17:15:51 -- accel/accel.sh@21 -- # val= 00:06:32.256 17:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:32.256 17:15:51 -- accel/accel.sh@21 -- # val= 00:06:32.256 17:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:32.256 17:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:32.256 17:15:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.256 17:15:51 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:32.256 17:15:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.256 00:06:32.256 real 0m2.666s 00:06:32.256 user 0m2.416s 00:06:32.256 sys 0m0.261s 00:06:32.256 17:15:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.256 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.256 ************************************ 00:06:32.256 END TEST accel_copy_crc32c 00:06:32.256 ************************************ 00:06:32.256 
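Quick sanity check on the copy_crc32c table above: at the reported 4096-byte transfer size, 426752 transfers/s is exactly the 1667 MiB/s shown in both rows. Plain awk arithmetic, nothing SPDK-specific:

# 426752 transfers/s x 4096 B per transfer, converted to MiB/s -- prints 1667
awk 'BEGIN { printf "%.0f MiB/s\n", 426752 * 4096 / (1024 * 1024) }'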
17:15:51 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:32.256 17:15:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:32.256 17:15:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.256 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.256 ************************************ 00:06:32.256 START TEST accel_copy_crc32c_C2 00:06:32.256 ************************************ 00:06:32.256 17:15:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:32.256 17:15:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.256 17:15:51 -- accel/accel.sh@17 -- # local accel_module 00:06:32.256 17:15:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:32.256 17:15:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:32.256 17:15:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.256 17:15:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.256 17:15:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.256 17:15:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.256 17:15:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.256 17:15:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.256 17:15:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.256 17:15:51 -- accel/accel.sh@42 -- # jq -r . 00:06:32.256 [2024-11-09 17:15:51.732805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.256 [2024-11-09 17:15:51.732896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357029 ] 00:06:32.256 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.256 [2024-11-09 17:15:51.802270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.256 [2024-11-09 17:15:51.870171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.634 17:15:53 -- accel/accel.sh@18 -- # out=' 00:06:33.634 SPDK Configuration: 00:06:33.634 Core mask: 0x1 00:06:33.634 00:06:33.634 Accel Perf Configuration: 00:06:33.634 Workload Type: copy_crc32c 00:06:33.634 CRC-32C seed: 0 00:06:33.634 Vector size: 4096 bytes 00:06:33.634 Transfer size: 8192 bytes 00:06:33.634 Vector count 2 00:06:33.634 Module: software 00:06:33.634 Queue depth: 32 00:06:33.634 Allocate depth: 32 00:06:33.634 # threads/core: 1 00:06:33.634 Run time: 1 seconds 00:06:33.634 Verify: Yes 00:06:33.634 00:06:33.634 Running for 1 seconds... 
00:06:33.634 00:06:33.634 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.634 ------------------------------------------------------------------------------------ 00:06:33.634 0,0 296960/s 2320 MiB/s 0 0 00:06:33.634 ==================================================================================== 00:06:33.634 Total 296960/s 1160 MiB/s 0 0' 00:06:33.634 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.634 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.634 17:15:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:33.634 17:15:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:33.634 17:15:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.634 17:15:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.634 17:15:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.634 17:15:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.634 17:15:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.634 17:15:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.634 17:15:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.634 17:15:53 -- accel/accel.sh@42 -- # jq -r . 00:06:33.634 [2024-11-09 17:15:53.062853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.634 [2024-11-09 17:15:53.062941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357268 ] 00:06:33.634 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.634 [2024-11-09 17:15:53.132148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.634 [2024-11-09 17:15:53.199099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.634 17:15:53 -- accel/accel.sh@21 -- # val= 00:06:33.634 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.634 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val= 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val=0x1 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val= 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val= 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val=0 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 
00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val= 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val=software 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val=32 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val=32 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val=1 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val=Yes 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val= 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:33.635 17:15:53 -- accel/accel.sh@21 -- # val= 00:06:33.635 17:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:33.635 17:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:35.013 17:15:54 -- accel/accel.sh@21 -- # val= 00:06:35.013 17:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:35.013 17:15:54 -- accel/accel.sh@21 -- # val= 00:06:35.013 17:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:35.013 17:15:54 -- accel/accel.sh@21 -- # val= 00:06:35.013 17:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:35.013 17:15:54 -- accel/accel.sh@21 -- # val= 00:06:35.013 17:15:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:35.013 17:15:54 -- accel/accel.sh@21 -- # val= 00:06:35.013 17:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:35.013 17:15:54 -- accel/accel.sh@21 -- # val= 00:06:35.013 17:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:35.013 17:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:35.013 17:15:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.013 17:15:54 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:35.013 17:15:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.013 00:06:35.013 real 0m2.663s 00:06:35.013 user 0m2.415s 00:06:35.013 sys 0m0.259s 00:06:35.013 17:15:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.013 17:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.013 ************************************ 00:06:35.013 END TEST accel_copy_crc32c_C2 00:06:35.013 ************************************ 00:06:35.013 17:15:54 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:35.013 17:15:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:35.013 17:15:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.013 17:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.013 ************************************ 00:06:35.013 START TEST accel_dualcast 00:06:35.013 ************************************ 00:06:35.013 17:15:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:35.013 17:15:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.013 17:15:54 -- accel/accel.sh@17 -- # local accel_module 00:06:35.013 17:15:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:35.013 17:15:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:35.013 17:15:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.013 17:15:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.013 17:15:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.013 17:15:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.013 17:15:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.013 17:15:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.013 17:15:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.013 17:15:54 -- accel/accel.sh@42 -- # jq -r . 00:06:35.013 [2024-11-09 17:15:54.444548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
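On the accel_copy_crc32c_C2 results above: -C 2 raises the vector count to 2, so each operation copies and CRCs two 4096-byte vectors (hence "Transfer size: 8192 bytes" in the report). The two bandwidth rows disagree -- 2320 MiB/s on row 0,0 versus 1160 MiB/s total for the same single-core run -- and the arithmetic suggests the per-core row counts the full 8192 bytes per operation while the Total row counts only 4096, presumably a reporting quirk in this accel_perf build rather than a real throughput drop:

# 296960 ops/s at 8192 B vs 4096 B per op -- prints 2320 vs 1160
awk 'BEGIN { printf "%.0f vs %.0f MiB/s\n", 296960 * 8192 / (1024 * 1024), 296960 * 4096 / (1024 * 1024) }'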
00:06:35.013 [2024-11-09 17:15:54.444644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357477 ] 00:06:35.013 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.013 [2024-11-09 17:15:54.512174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.013 [2024-11-09 17:15:54.580254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.392 17:15:55 -- accel/accel.sh@18 -- # out=' 00:06:36.392 SPDK Configuration: 00:06:36.392 Core mask: 0x1 00:06:36.392 00:06:36.392 Accel Perf Configuration: 00:06:36.392 Workload Type: dualcast 00:06:36.392 Transfer size: 4096 bytes 00:06:36.392 Vector count 1 00:06:36.392 Module: software 00:06:36.392 Queue depth: 32 00:06:36.392 Allocate depth: 32 00:06:36.392 # threads/core: 1 00:06:36.392 Run time: 1 seconds 00:06:36.392 Verify: Yes 00:06:36.392 00:06:36.392 Running for 1 seconds... 00:06:36.392 00:06:36.392 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.392 ------------------------------------------------------------------------------------ 00:06:36.392 0,0 623232/s 2434 MiB/s 0 0 00:06:36.393 ==================================================================================== 00:06:36.393 Total 623232/s 2434 MiB/s 0 0' 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:36.393 17:15:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:36.393 17:15:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.393 17:15:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.393 17:15:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.393 17:15:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.393 17:15:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.393 17:15:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.393 17:15:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.393 17:15:55 -- accel/accel.sh@42 -- # jq -r . 00:06:36.393 [2024-11-09 17:15:55.769647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
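The dualcast run above writes the same 4096-byte source to two destination buffers per operation; the reported 2434 MiB/s counts the source payload once (623232 ops/s x 4096 B), so the bytes actually written come to roughly double that figure.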
00:06:36.393 [2024-11-09 17:15:55.769741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357644 ] 00:06:36.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.393 [2024-11-09 17:15:55.837880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.393 [2024-11-09 17:15:55.909655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val= 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val= 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val=0x1 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val= 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val= 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val=dualcast 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val= 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val=software 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val=32 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val=32 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val=1 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val=Yes 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val= 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:36.393 17:15:55 -- accel/accel.sh@21 -- # val= 00:06:36.393 17:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:36.393 17:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:37.331 17:15:57 -- accel/accel.sh@21 -- # val= 00:06:37.331 17:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:37.331 17:15:57 -- accel/accel.sh@21 -- # val= 00:06:37.331 17:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:37.331 17:15:57 -- accel/accel.sh@21 -- # val= 00:06:37.331 17:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:37.331 17:15:57 -- accel/accel.sh@21 -- # val= 00:06:37.331 17:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:37.331 17:15:57 -- accel/accel.sh@21 -- # val= 00:06:37.331 17:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:37.331 17:15:57 -- accel/accel.sh@21 -- # val= 00:06:37.331 17:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:37.331 17:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:37.331 17:15:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.331 17:15:57 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:37.331 17:15:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.331 00:06:37.331 real 0m2.660s 00:06:37.331 user 0m2.412s 00:06:37.331 sys 0m0.256s 00:06:37.331 17:15:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.331 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:37.331 ************************************ 00:06:37.331 END TEST accel_dualcast 00:06:37.331 ************************************ 00:06:37.591 17:15:57 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:37.591 17:15:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:37.591 17:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.591 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:37.591 ************************************ 00:06:37.591 START TEST accel_compare 00:06:37.591 ************************************ 00:06:37.591 17:15:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:37.591 17:15:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.591 17:15:57 -- 
accel/accel.sh@17 -- # local accel_module 00:06:37.591 17:15:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:37.591 17:15:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:37.591 17:15:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.591 17:15:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.591 17:15:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.591 17:15:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.591 17:15:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.591 17:15:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.591 17:15:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.591 17:15:57 -- accel/accel.sh@42 -- # jq -r . 00:06:37.591 [2024-11-09 17:15:57.153224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.591 [2024-11-09 17:15:57.153312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357895 ] 00:06:37.591 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.591 [2024-11-09 17:15:57.221547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.591 [2024-11-09 17:15:57.291024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.970 17:15:58 -- accel/accel.sh@18 -- # out=' 00:06:38.970 SPDK Configuration: 00:06:38.970 Core mask: 0x1 00:06:38.970 00:06:38.970 Accel Perf Configuration: 00:06:38.970 Workload Type: compare 00:06:38.970 Transfer size: 4096 bytes 00:06:38.970 Vector count 1 00:06:38.970 Module: software 00:06:38.970 Queue depth: 32 00:06:38.970 Allocate depth: 32 00:06:38.970 # threads/core: 1 00:06:38.970 Run time: 1 seconds 00:06:38.970 Verify: Yes 00:06:38.970 00:06:38.970 Running for 1 seconds... 00:06:38.970 00:06:38.970 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.970 ------------------------------------------------------------------------------------ 00:06:38.970 0,0 791008/s 3089 MiB/s 0 0 00:06:38.970 ==================================================================================== 00:06:38.970 Total 791008/s 3089 MiB/s 0 0' 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:38.970 17:15:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:38.970 17:15:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.970 17:15:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.970 17:15:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.970 17:15:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.970 17:15:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.970 17:15:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.970 17:15:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.970 17:15:58 -- accel/accel.sh@42 -- # jq -r . 00:06:38.970 [2024-11-09 17:15:58.477681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
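For the compare run above: compare is a read-only memcmp-style check of two buffers with nothing written back, which fits it posting the highest rate in this series -- 791008 ops/s x 4096 B lines up with the reported 3089 MiB/s.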
00:06:38.970 [2024-11-09 17:15:58.477770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358162 ] 00:06:38.970 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.970 [2024-11-09 17:15:58.545409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.970 [2024-11-09 17:15:58.611286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val= 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val= 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val=0x1 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val= 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val= 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val=compare 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val= 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val=software 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val=32 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val=32 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.970 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.970 17:15:58 -- accel/accel.sh@21 -- # val=1 00:06:38.970 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.971 17:15:58 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:38.971 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.971 17:15:58 -- accel/accel.sh@21 -- # val=Yes 00:06:38.971 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.971 17:15:58 -- accel/accel.sh@21 -- # val= 00:06:38.971 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:38.971 17:15:58 -- accel/accel.sh@21 -- # val= 00:06:38.971 17:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # IFS=: 00:06:38.971 17:15:58 -- accel/accel.sh@20 -- # read -r var val 00:06:40.349 17:15:59 -- accel/accel.sh@21 -- # val= 00:06:40.349 17:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # IFS=: 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # read -r var val 00:06:40.349 17:15:59 -- accel/accel.sh@21 -- # val= 00:06:40.349 17:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # IFS=: 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # read -r var val 00:06:40.349 17:15:59 -- accel/accel.sh@21 -- # val= 00:06:40.349 17:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # IFS=: 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # read -r var val 00:06:40.349 17:15:59 -- accel/accel.sh@21 -- # val= 00:06:40.349 17:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # IFS=: 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # read -r var val 00:06:40.349 17:15:59 -- accel/accel.sh@21 -- # val= 00:06:40.349 17:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # IFS=: 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # read -r var val 00:06:40.349 17:15:59 -- accel/accel.sh@21 -- # val= 00:06:40.349 17:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # IFS=: 00:06:40.349 17:15:59 -- accel/accel.sh@20 -- # read -r var val 00:06:40.349 17:15:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.349 17:15:59 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:40.349 17:15:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.349 00:06:40.349 real 0m2.653s 00:06:40.349 user 0m2.407s 00:06:40.349 sys 0m0.254s 00:06:40.349 17:15:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.349 17:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.349 ************************************ 00:06:40.349 END TEST accel_compare 00:06:40.349 ************************************ 00:06:40.349 17:15:59 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:40.349 17:15:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:40.349 17:15:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.349 17:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.349 ************************************ 00:06:40.349 START TEST accel_xor 00:06:40.349 ************************************ 00:06:40.349 17:15:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:40.349 17:15:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.349 17:15:59 -- accel/accel.sh@17 
-- # local accel_module 00:06:40.349 17:15:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:40.349 17:15:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:40.349 17:15:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.349 17:15:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.349 17:15:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.349 17:15:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.349 17:15:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.349 17:15:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.349 17:15:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.349 17:15:59 -- accel/accel.sh@42 -- # jq -r . 00:06:40.349 [2024-11-09 17:15:59.856874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.349 [2024-11-09 17:15:59.856970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358451 ] 00:06:40.349 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.349 [2024-11-09 17:15:59.926869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.349 [2024-11-09 17:15:59.996248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.727 17:16:01 -- accel/accel.sh@18 -- # out=' 00:06:41.727 SPDK Configuration: 00:06:41.727 Core mask: 0x1 00:06:41.727 00:06:41.727 Accel Perf Configuration: 00:06:41.727 Workload Type: xor 00:06:41.727 Source buffers: 2 00:06:41.727 Transfer size: 4096 bytes 00:06:41.727 Vector count 1 00:06:41.727 Module: software 00:06:41.727 Queue depth: 32 00:06:41.727 Allocate depth: 32 00:06:41.727 # threads/core: 1 00:06:41.727 Run time: 1 seconds 00:06:41.727 Verify: Yes 00:06:41.727 00:06:41.727 Running for 1 seconds... 00:06:41.727 00:06:41.727 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.727 ------------------------------------------------------------------------------------ 00:06:41.727 0,0 670752/s 2620 MiB/s 0 0 00:06:41.727 ==================================================================================== 00:06:41.727 Total 670752/s 2620 MiB/s 0 0' 00:06:41.727 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.727 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.727 17:16:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:41.727 17:16:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:41.727 17:16:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.727 17:16:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.727 17:16:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.727 17:16:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.728 17:16:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.728 17:16:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.728 17:16:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.728 17:16:01 -- accel/accel.sh@42 -- # jq -r . 00:06:41.728 [2024-11-09 17:16:01.189611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
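The xor run above uses "Source buffers: 2", i.e. each operation XORs two 4096-byte sources into one destination. The rate checks out the same way as the earlier workloads:

# 670752 ops/s x 4096 B per op -- prints 2620, matching the table
awk 'BEGIN { printf "%.0f MiB/s\n", 670752 * 4096 / (1024 * 1024) }'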
00:06:41.728 [2024-11-09 17:16:01.189720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358722 ] 00:06:41.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.728 [2024-11-09 17:16:01.258812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.728 [2024-11-09 17:16:01.324507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val=0x1 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val=xor 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val=2 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val=software 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val=32 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val=32 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- 
accel/accel.sh@21 -- # val=1 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val=Yes 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:41.728 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:41.728 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:41.728 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.106 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.106 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.106 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.106 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.106 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.106 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.106 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.106 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.106 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.106 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.106 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.106 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.107 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.107 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.107 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.107 17:16:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.107 17:16:02 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:43.107 17:16:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.107 00:06:43.107 real 0m2.665s 00:06:43.107 user 0m2.414s 00:06:43.107 sys 0m0.259s 00:06:43.107 17:16:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.107 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.107 ************************************ 00:06:43.107 END TEST accel_xor 00:06:43.107 ************************************ 00:06:43.107 17:16:02 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:43.107 17:16:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:43.107 17:16:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.107 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.107 ************************************ 00:06:43.107 START TEST accel_xor 
00:06:43.107 ************************************ 00:06:43.107 17:16:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:43.107 17:16:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.107 17:16:02 -- accel/accel.sh@17 -- # local accel_module 00:06:43.107 17:16:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:43.107 17:16:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:43.107 17:16:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.107 17:16:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.107 17:16:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.107 17:16:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.107 17:16:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.107 17:16:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.107 17:16:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.107 17:16:02 -- accel/accel.sh@42 -- # jq -r . 00:06:43.107 [2024-11-09 17:16:02.571316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.107 [2024-11-09 17:16:02.571408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359003 ] 00:06:43.107 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.107 [2024-11-09 17:16:02.641367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.107 [2024-11-09 17:16:02.708642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.485 17:16:03 -- accel/accel.sh@18 -- # out=' 00:06:44.485 SPDK Configuration: 00:06:44.485 Core mask: 0x1 00:06:44.485 00:06:44.485 Accel Perf Configuration: 00:06:44.485 Workload Type: xor 00:06:44.485 Source buffers: 3 00:06:44.485 Transfer size: 4096 bytes 00:06:44.485 Vector count 1 00:06:44.485 Module: software 00:06:44.485 Queue depth: 32 00:06:44.485 Allocate depth: 32 00:06:44.485 # threads/core: 1 00:06:44.485 Run time: 1 seconds 00:06:44.485 Verify: Yes 00:06:44.485 00:06:44.485 Running for 1 seconds... 00:06:44.485 00:06:44.485 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.485 ------------------------------------------------------------------------------------ 00:06:44.485 0,0 660320/s 2579 MiB/s 0 0 00:06:44.485 ==================================================================================== 00:06:44.485 Total 660320/s 2579 MiB/s 0 0' 00:06:44.485 17:16:03 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:03 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:44.485 17:16:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:44.485 17:16:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.485 17:16:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.485 17:16:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.485 17:16:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.485 17:16:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.485 17:16:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.485 17:16:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.485 17:16:03 -- accel/accel.sh@42 -- # jq -r . 00:06:44.485 [2024-11-09 17:16:03.899048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
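This second xor pass adds -x 3, raising the source-buffer count to three. Throughput dips only slightly against the two-source run (660320 vs 670752 ops/s), consistent with the software path reading one extra 4 KiB source per operation while the output written per operation stays the same.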
00:06:44.485 [2024-11-09 17:16:03.899138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359266 ] 00:06:44.485 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.485 [2024-11-09 17:16:03.967344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.485 [2024-11-09 17:16:04.033106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val=0x1 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val=xor 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val=3 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val=software 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val=32 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val=32 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- 
accel/accel.sh@21 -- # val=1 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.485 17:16:04 -- accel/accel.sh@21 -- # val=Yes 00:06:44.485 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.485 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.486 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.486 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.486 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.486 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.486 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.486 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.486 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.486 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.486 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.865 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:45.865 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:45.865 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:45.865 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:45.865 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:45.865 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:45.865 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:45.865 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:45.865 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:45.865 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:45.865 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:45.865 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:45.865 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:45.865 17:16:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.865 17:16:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:45.865 17:16:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.865 00:06:45.865 real 0m2.661s 00:06:45.865 user 0m2.412s 00:06:45.865 sys 0m0.257s 00:06:45.865 17:16:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.865 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:06:45.865 ************************************ 00:06:45.865 END TEST accel_xor 00:06:45.865 ************************************ 00:06:45.865 17:16:05 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:45.865 17:16:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:45.865 17:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.865 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:06:45.865 ************************************ 00:06:45.865 START TEST 
00:06:45.865 17:16:05 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:45.865 17:16:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:45.865 17:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:45.865 17:16:05 -- common/autotest_common.sh@10 -- # set +x
00:06:45.865 ************************************
00:06:45.865 START TEST accel_dif_verify
00:06:45.865 ************************************
00:06:45.865 17:16:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:45.865 [xtrace condensed: accel_test preamble; local accel_opc/accel_module, build_accel_config ('[[ 0 -gt 0 ]]' checks, '[[ -n '' ]]', 'local IFS=,', 'jq -r .')]
00:06:45.865 [2024-11-09 17:16:05.279083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:45.865 [2024-11-09 17:16:05.279158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359469 ]
00:06:45.865 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.865 [2024-11-09 17:16:05.347167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.865 [2024-11-09 17:16:05.415671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.244 17:16:06 -- accel/accel.sh@18 -- # out='
00:06:47.244 SPDK Configuration:
00:06:47.244 Core mask: 0x1
00:06:47.244 
00:06:47.244 Accel Perf Configuration:
00:06:47.244 Workload Type: dif_verify
00:06:47.244 Vector size: 4096 bytes
00:06:47.244 Transfer size: 4096 bytes
00:06:47.244 Block size: 512 bytes
00:06:47.244 Metadata size: 8 bytes
00:06:47.244 Vector count 1
00:06:47.244 Module: software
00:06:47.244 Queue depth: 32
00:06:47.244 Allocate depth: 32
00:06:47.244 # threads/core: 1
00:06:47.244 Run time: 1 seconds
00:06:47.244 Verify: No
00:06:47.244 
00:06:47.244 Running for 1 seconds...
00:06:47.244 
00:06:47.244 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:47.244 ------------------------------------------------------------------------------------
00:06:47.244 0,0 244288/s 969 MiB/s 0 0
00:06:47.244 ====================================================================================
00:06:47.244 Total 244288/s 954 MiB/s 0 0'
00:06:47.244 17:16:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:47.244 [xtrace condensed: build_accel_config for the second run, same checks as above, 'jq -r .']
00:06:47.244 [2024-11-09 17:16:06.607926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:47.244 [2024-11-09 17:16:06.608016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359630 ]
00:06:47.244 EAL: No free 2048 kB hugepages reported on node 1
00:06:47.244 [2024-11-09 17:16:06.677972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:47.244 [2024-11-09 17:16:06.745393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.245 [xtrace condensed: parse loop; accel_opc=dif_verify, accel_module=software; values read: 0x1, dif_verify, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No]
00:06:48.183 [xtrace condensed: trailing empty 'val=' reads at 17:16:07]
00:06:48.183 17:16:07 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:48.183 17:16:07 -- accel/accel.sh@28 -- # [[ -n dif_verify ]]
00:06:48.183 17:16:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:48.183 
00:06:48.183 real 0m2.664s
00:06:48.183 user 0m2.414s
00:06:48.183 sys 0m0.260s
00:06:48.183 17:16:07 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:48.183 17:16:07 -- common/autotest_common.sh@10 -- # set +x
00:06:48.443 ************************************
00:06:48.443 END TEST accel_dif_verify
00:06:48.443 ************************************
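The two bandwidth columns can be sanity-checked from the transfer count. The Total row appears to be transfers per second times the 4096-byte transfer size, and the per-core row is consistent with also counting the 8-byte DIF tuple carried by each 512-byte block; that second reading is an inference from the numbers, not something the tool labels here:

    # Total row: payload bytes only.
    echo $(( 244288 * 4096 / 1048576 ))           # -> 954 (MiB/s), as reported
    # Per-core row: payload plus 8 blocks * 8 bytes of DIF per transfer.
    echo $(( 244288 * (4096 + 8 * 8) / 1048576 )) # -> 969 (MiB/s), as reported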
00:06:48.443 17:16:07 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:48.443 17:16:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:48.443 17:16:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:48.443 17:16:07 -- common/autotest_common.sh@10 -- # set +x
00:06:48.443 ************************************
00:06:48.443 START TEST accel_dif_generate
00:06:48.443 ************************************
00:06:48.443 17:16:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:48.443 [xtrace condensed: accel_test preamble; local accel_opc/accel_module, build_accel_config, 'jq -r .']
00:06:48.443 [2024-11-09 17:16:07.993023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:48.443 [2024-11-09 17:16:07.993117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359865 ]
00:06:48.443 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.443 [2024-11-09 17:16:08.066738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.443 [2024-11-09 17:16:08.136854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.822 17:16:09 -- accel/accel.sh@18 -- # out='
00:06:49.822 SPDK Configuration:
00:06:49.822 Core mask: 0x1
00:06:49.822 
00:06:49.822 Accel Perf Configuration:
00:06:49.822 Workload Type: dif_generate
00:06:49.822 Vector size: 4096 bytes
00:06:49.822 Transfer size: 4096 bytes
00:06:49.822 Block size: 512 bytes
00:06:49.822 Metadata size: 8 bytes
00:06:49.822 Vector count 1
00:06:49.822 Module: software
00:06:49.822 Queue depth: 32
00:06:49.822 Allocate depth: 32
00:06:49.822 # threads/core: 1
00:06:49.822 Run time: 1 seconds
00:06:49.822 Verify: No
00:06:49.822 
00:06:49.822 Running for 1 seconds...
00:06:49.822 
00:06:49.822 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:49.822 ------------------------------------------------------------------------------------
00:06:49.822 0,0 285568/s 1132 MiB/s 0 0
00:06:49.822 ====================================================================================
00:06:49.822 Total 285568/s 1115 MiB/s 0 0'
00:06:49.822 17:16:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:49.822 [xtrace condensed: build_accel_config for the second run, 'jq -r .']
00:06:49.823 [2024-11-09 17:16:09.328645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:49.823 [2024-11-09 17:16:09.328736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360139 ]
00:06:49.823 EAL: No free 2048 kB hugepages reported on node 1
00:06:49.823 [2024-11-09 17:16:09.397841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:49.823 [2024-11-09 17:16:09.463886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.823 [xtrace condensed: parse loop; accel_opc=dif_generate, accel_module=software; values read: 0x1, dif_generate, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No]
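Both dif_generate and dif_verify run over the same geometry: each 4096-byte transfer is treated as 512-byte blocks, each carrying an 8-byte DIF tuple (the 'Block size' and 'Metadata size' lines above). The implied per-transfer protection overhead is easy to check:

    # DIF geometry from the configuration above (a sketch, not SPDK source).
    xfer=4096; blk=512; md=8
    blocks=$(( xfer / blk ))   # 8 blocks per transfer
    dif=$(( blocks * md ))     # 64 bytes of protection info per transfer
    echo "$blocks blocks, $dif DIF bytes per $xfer-byte transfer"

dif_generate only computes those 64 bytes while dif_verify must compute and compare them, which lines up with generate posting the higher rate here (285568/s vs 244288/s).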
00:06:51.201 [xtrace condensed: trailing empty 'val=' reads at 17:16:10]
00:06:51.201 17:16:10 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:51.201 17:16:10 -- accel/accel.sh@28 -- # [[ -n dif_generate ]]
00:06:51.201 17:16:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:51.201 
00:06:51.201 real 0m2.671s
00:06:51.201 user 0m2.424s
00:06:51.201 sys 0m0.256s
00:06:51.201 17:16:10 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:51.201 17:16:10 -- common/autotest_common.sh@10 -- # set +x
00:06:51.201 ************************************
00:06:51.201 END TEST accel_dif_generate
00:06:51.201 ************************************
00:06:51.201 17:16:10 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:06:51.201 17:16:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:51.201 17:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:51.201 17:16:10 -- common/autotest_common.sh@10 -- # set +x
00:06:51.201 ************************************
00:06:51.201 START TEST accel_dif_generate_copy
00:06:51.201 ************************************
00:06:51.201 17:16:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:51.201 [xtrace condensed: accel_test preamble; local accel_opc/accel_module, build_accel_config, 'jq -r .']
00:06:51.201 [2024-11-09 17:16:10.710391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:52.581 [2024-11-09 17:16:10.710479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360420 ]
00:06:52.581 EAL: No free 2048 kB hugepages reported on node 1
00:06:52.581 [2024-11-09 17:16:10.780938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.581 [2024-11-09 17:16:10.848974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.581 17:16:12 -- accel/accel.sh@18 -- # out='
00:06:52.581 SPDK Configuration:
00:06:52.581 Core mask: 0x1
00:06:52.581 
00:06:52.581 Accel Perf Configuration:
00:06:52.581 Workload Type: dif_generate_copy
00:06:52.581 Vector size: 4096 bytes
00:06:52.581 Transfer size: 4096 bytes
00:06:52.581 Vector count 1
00:06:52.581 Module: software
00:06:52.581 Queue depth: 32
00:06:52.581 Allocate depth: 32
00:06:52.581 # threads/core: 1
00:06:52.581 Run time: 1 seconds
00:06:52.581 Verify: No
00:06:52.581 
00:06:52.581 Running for 1 seconds...
00:06:52.581 
00:06:52.581 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:52.581 ------------------------------------------------------------------------------------
00:06:52.581 0,0 222144/s 881 MiB/s 0 0
00:06:52.581 ====================================================================================
00:06:52.581 Total 222144/s 867 MiB/s 0 0'
00:06:52.581 17:16:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:52.581 [xtrace condensed: build_accel_config for the second run, 'jq -r .']
00:06:52.581 [2024-11-09 17:16:12.039893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:52.581 [2024-11-09 17:16:12.039982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360688 ]
00:06:52.581 EAL: No free 2048 kB hugepages reported on node 1
00:06:52.581 [2024-11-09 17:16:12.109185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.581 [2024-11-09 17:16:12.179697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.581 [xtrace condensed: parse loop; accel_opc=dif_generate_copy, accel_module=software; values read: 0x1, dif_generate_copy, '4096 bytes', '4096 bytes', software, 32, 32, 1, '1 seconds', No]
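Each test in this stretch is wrapped by run_test from autotest_common.sh, which prints the banner pairs, times the test body (the real/user/sys triplets), and toggles xtrace around the banners. A rough sketch of the wrapper, inferred from this output rather than copied from the script:

    # Approximation of run_test as its output suggests (hypothetical body;
    # xtrace_disable/xtrace_restore are the SPDK helpers seen in the log).
    run_test() {
        local name=$1; shift
        xtrace_disable
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        xtrace_restore
        time "$@"              # e.g. accel_test -t 1 -w dif_generate_copy
        xtrace_disable
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        xtrace_restore
    }

The '[' N -le 1 ']' checks at autotest_common.sh@1087 appear to compare the forwarded argument count (6, 8, 9, 11 across these tests) against 1, presumably guarding against under-specified run_test calls.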
00:06:53.962 [xtrace condensed: trailing empty 'val=' reads at 17:16:13]
00:06:53.962 17:16:13 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:53.962 17:16:13 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]]
00:06:53.962 17:16:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:53.962 
00:06:53.962 real 0m2.664s
00:06:53.962 user 0m2.412s
00:06:53.962 sys 0m0.261s
00:06:53.962 17:16:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:53.962 17:16:13 -- common/autotest_common.sh@10 -- # set +x
00:06:53.962 ************************************
00:06:53.962 END TEST accel_dif_generate_copy
00:06:53.962 ************************************
00:06:53.962 17:16:13 -- accel/accel.sh@107 -- # [[ y == y ]]
00:06:53.962 17:16:13 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:53.962 17:16:13 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:06:53.962 17:16:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:53.962 17:16:13 -- common/autotest_common.sh@10 -- # set +x
00:06:53.962 ************************************
00:06:53.962 START TEST accel_comp
00:06:53.962 ************************************
00:06:53.962 17:16:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:53.962 [xtrace condensed: accel_test preamble; local accel_opc/accel_module, build_accel_config, 'jq -r .']
00:06:53.962 [2024-11-09 17:16:13.425361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:53.962 [2024-11-09 17:16:13.425452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360979 ]
00:06:53.962 EAL: No free 2048 kB hugepages reported on node 1
00:06:53.962 [2024-11-09 17:16:13.493945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.962 [2024-11-09 17:16:13.560848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.342 17:16:14 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:55.342 
00:06:55.342 SPDK Configuration:
00:06:55.342 Core mask: 0x1
00:06:55.342 
00:06:55.342 Accel Perf Configuration:
00:06:55.342 Workload Type: compress
00:06:55.342 Transfer size: 4096 bytes
00:06:55.342 Vector count 1
00:06:55.342 Module: software
00:06:55.342 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:55.342 Queue depth: 32
00:06:55.342 Allocate depth: 32
00:06:55.342 # threads/core: 1
00:06:55.342 Run time: 1 seconds
00:06:55.342 Verify: No
00:06:55.342 
00:06:55.342 Running for 1 seconds...
00:06:55.342 
00:06:55.342 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:55.342 ------------------------------------------------------------------------------------
00:06:55.342 0,0 68160/s 284 MiB/s 0 0
00:06:55.342 ====================================================================================
00:06:55.342 Total 68160/s 266 MiB/s 0 0'
00:06:55.342 17:16:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:55.342 [xtrace condensed: build_accel_config for the second run, 'jq -r .']
00:06:55.342 [2024-11-09 17:16:14.752680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:55.342 [2024-11-09 17:16:14.752769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361247 ]
00:06:55.342 EAL: No free 2048 kB hugepages reported on node 1
00:06:55.342 [2024-11-09 17:16:14.820635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.342 [2024-11-09 17:16:14.888879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.342 [xtrace condensed: parse loop; accel_opc=compress, accel_module=software; values read: 0x1, compress, '4096 bytes', software, /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', No]
00:06:56.722 [xtrace condensed: trailing empty 'val=' reads at 17:16:16]
00:06:56.722 17:16:16 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:56.722 17:16:16 -- accel/accel.sh@28 -- # [[ -n compress ]]
00:06:56.722 17:16:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:56.722 
00:06:56.722 real 0m2.663s
00:06:56.722 user 0m2.420s
00:06:56.722 sys 0m0.253s
00:06:56.722 17:16:16 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:56.722 17:16:16 -- common/autotest_common.sh@10 -- # set +x
00:06:56.722 ************************************
00:06:56.722 END TEST accel_comp
00:06:56.722 ************************************
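At this point the suite is on the compression path: the '[[ y == y ]]' at accel.sh@107 looks like a build-flag gate (the flag itself is not shown in this excerpt), and all three compression tests (accel_comp above, accel_decomp and accel_decmop_full below) feed the same input file via -l. Restated as standalone commands, illustrative only, since the harness really passes an accel config over /dev/fd/62:

    # The three compression-path runs, flags verbatim from this log.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    BIB=$SPDK/test/accel/bib
    "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$BIB"
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0
    # -y correlates with 'Verify: Yes' in the summaries below; -o 0 correlates
    # with the transfer size growing from 4096 to 111250 bytes in the last
    # test, i.e. whole-chunk decompression (our reading, not documented here).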
00:06:56.722 17:16:16 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:56.722 17:16:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:56.722 17:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:56.722 17:16:16 -- common/autotest_common.sh@10 -- # set +x
00:06:56.722 ************************************
00:06:56.722 START TEST accel_decomp
00:06:56.722 ************************************
00:06:56.722 17:16:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:56.722 [xtrace condensed: accel_test preamble; local accel_opc/accel_module, build_accel_config, 'jq -r .']
00:06:56.722 [2024-11-09 17:16:16.135036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:56.723 [2024-11-09 17:16:16.135112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361455 ]
00:06:56.723 EAL: No free 2048 kB hugepages reported on node 1
00:06:56.723 [2024-11-09 17:16:16.203319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.723 [2024-11-09 17:16:16.277800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.133 17:16:17 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:58.133 
00:06:58.133 SPDK Configuration:
00:06:58.133 Core mask: 0x1
00:06:58.133 
00:06:58.133 Accel Perf Configuration:
00:06:58.133 Workload Type: decompress
00:06:58.133 Transfer size: 4096 bytes
00:06:58.133 Vector count 1
00:06:58.133 Module: software
00:06:58.133 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:58.133 Queue depth: 32
00:06:58.133 Allocate depth: 32
00:06:58.133 # threads/core: 1
00:06:58.133 Run time: 1 seconds
00:06:58.133 Verify: Yes
00:06:58.133 
00:06:58.133 Running for 1 seconds...
00:06:58.133 
00:06:58.133 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:58.133 ------------------------------------------------------------------------------------
00:06:58.133 0,0 91776/s 169 MiB/s 0 0
00:06:58.133 ====================================================================================
00:06:58.133 Total 91776/s 358 MiB/s 0 0'
00:06:58.133 17:16:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:58.133 [xtrace condensed: build_accel_config for the second run, 'jq -r .']
00:06:58.134 [2024-11-09 17:16:17.470474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:58.134 [2024-11-09 17:16:17.470578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361628 ]
00:06:58.134 EAL: No free 2048 kB hugepages reported on node 1
00:06:58.134 [2024-11-09 17:16:17.539787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.134 [2024-11-09 17:16:17.606376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.134 [xtrace condensed: parse loop; accel_opc=decompress, accel_module=software; values read: 0x1, decompress, '4096 bytes', software, /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
00:06:59.094 [xtrace condensed: trailing empty 'val=' reads at 17:16:18]
00:06:59.094 17:16:18 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:59.094 17:16:18 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:06:59.094 17:16:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:59.094 
00:06:59.094 real 0m2.666s
00:06:59.094 user 0m2.419s
00:06:59.094 sys 0m0.258s
00:06:59.094 17:16:18 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:59.094 17:16:18 -- common/autotest_common.sh@10 -- # set +x
00:06:59.094 ************************************
00:06:59.094 END TEST accel_decomp
00:06:59.094 ************************************
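Unlike the dif_* and compress runs, the decompress runs report 'Verify: Yes', matching the -y flag on their accel_test invocations, so the 91776/s figure is a verified rate. The Total bandwidth again works out to transfers times the 4096-byte decompressed chunk, while the smaller per-core figure plausibly counts compressed input bytes instead; that split is an inference, since the tool does not label the two rows:

    echo $(( 91776 * 4096 / 1048576 ))   # -> 358 (MiB/s), the Total row
    # ASSUMING the per-core row counts compressed input bytes, the average
    # compressed chunk behind each 4096-byte output is roughly:
    echo $(( 169 * 1048576 / 91776 ))    # -> ~1930 bytes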
00:06:59.094 17:16:18 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:59.094 17:16:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:06:59.094 17:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:59.094 17:16:18 -- common/autotest_common.sh@10 -- # set +x
00:06:59.095 ************************************
00:06:59.095 START TEST accel_decmop_full
00:06:59.095 ************************************
00:06:59.095 17:16:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:59.095 [xtrace condensed: accel_test preamble; local accel_opc/accel_module, build_accel_config, 'jq -r .']
00:06:59.095 [2024-11-09 17:16:18.851798] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:59.353 [2024-11-09 17:16:18.851874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361850 ]
00:06:59.353 EAL: No free 2048 kB hugepages reported on node 1
00:06:59.353 [2024-11-09 17:16:18.920213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.353 [2024-11-09 17:16:18.988414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.732 17:16:20 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:00.732 
00:07:00.732 SPDK Configuration:
00:07:00.732 Core mask: 0x1
00:07:00.732 
00:07:00.732 Accel Perf Configuration:
00:07:00.732 Workload Type: decompress
00:07:00.732 Transfer size: 111250 bytes
00:07:00.732 Vector count 1
00:07:00.732 Module: software
00:07:00.732 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:07:00.732 Queue depth: 32
00:07:00.732 Allocate depth: 32
00:07:00.732 # threads/core: 1
00:07:00.732 Run time: 1 seconds
00:07:00.732 Verify: Yes
00:07:00.732 
00:07:00.732 Running for 1 seconds...
00:07:00.732 
00:07:00.732 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:00.732 ------------------------------------------------------------------------------------
00:07:00.732 0,0 5888/s 243 MiB/s 0 0
00:07:00.732 ====================================================================================
00:07:00.732 Total 5888/s 624 MiB/s 0 0'
00:07:00.732 17:16:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:00.732 17:16:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:00.732 [xtrace condensed: build_accel_config for the second run, 'jq -r .']
00:07:00.732 [2024-11-09 17:16:20.190997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:00.732 [2024-11-09 17:16:20.191091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362111 ]
00:07:00.732 EAL: No free 2048 kB hugepages reported on node 1
00:07:00.732 [2024-11-09 17:16:20.260280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.732 [2024-11-09 17:16:20.326228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.732 [xtrace condensed: parse loop; accel_opc=decompress, accel_module=software; values read: 0x1, decompress, '111250 bytes', software, /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
00:07:02.112 [xtrace condensed: trailing empty 'val=' reads at 17:16:21]
00:07:02.112 17:16:21
-- accel/accel.sh@20 -- # IFS=: 00:07:02.112 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:02.112 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:02.112 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.112 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:02.112 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:02.112 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:02.112 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.112 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:02.112 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:02.112 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:02.112 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.112 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:02.112 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:02.112 17:16:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.112 17:16:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:02.112 17:16:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.112 00:07:02.112 real 0m2.677s 00:07:02.112 user 0m2.431s 00:07:02.112 sys 0m0.243s 00:07:02.112 17:16:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.112 17:16:21 -- common/autotest_common.sh@10 -- # set +x 00:07:02.112 ************************************ 00:07:02.112 END TEST accel_decmop_full 00:07:02.112 ************************************ 00:07:02.112 17:16:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.112 17:16:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:02.112 17:16:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.112 17:16:21 -- common/autotest_common.sh@10 -- # set +x 00:07:02.112 ************************************ 00:07:02.112 START TEST accel_decomp_mcore 00:07:02.112 ************************************ 00:07:02.112 17:16:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.112 17:16:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.112 17:16:21 -- accel/accel.sh@17 -- # local accel_module 00:07:02.112 17:16:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.112 17:16:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.112 17:16:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.112 17:16:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.112 17:16:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.112 17:16:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.112 17:16:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.112 17:16:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.112 17:16:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.112 17:16:21 -- accel/accel.sh@42 -- # jq -r . 00:07:02.112 [2024-11-09 17:16:21.568164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
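The accel_decomp_mcore run starting here drives the same accel_perf binary as the single-core tests, only with a wider core mask. The command lines in the trace map one-to-one onto the fields of the "Accel Perf Configuration" dumps, so the flag meanings can be read off directly; a minimal standalone sketch, assuming a built SPDK tree in SPDK_DIR (a hypothetical variable, defaulted below to the workspace path from this log):

#!/usr/bin/env bash
# Sketch of the accel_perf invocations seen in this log. Flag meanings are
# inferred from the configuration dumps: -t run time in seconds, -w workload
# type, -l input file, -y verify results, -m core mask.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}
BIB=$SPDK_DIR/test/accel/bib   # compressed input used by every decompress test

# Single-core decompress with the default 4096-byte transfer size:
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y

# The "full" variants add -o 0; the configuration dumps then report a
# 111250-byte transfer size, which appears to be the file's full
# uncompressed length rather than the 4096-byte default:
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0

# The mcore variants add -m 0xf to spread reactors across four cores:
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -m 0xf

The harness itself also passes -c /dev/fd/62 to feed a JSON accel configuration; with no optional modules configured (accel_json_cfg stays empty throughout this trace) that config is typically omittable for a quick manual run.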
00:07:02.112 [2024-11-09 17:16:21.568255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362401 ] 00:07:02.112 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.112 [2024-11-09 17:16:21.637646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.112 [2024-11-09 17:16:21.707288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.112 [2024-11-09 17:16:21.707384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.112 [2024-11-09 17:16:21.707472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.112 [2024-11-09 17:16:21.707474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.492 17:16:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:03.492 00:07:03.492 SPDK Configuration: 00:07:03.492 Core mask: 0xf 00:07:03.492 00:07:03.492 Accel Perf Configuration: 00:07:03.492 Workload Type: decompress 00:07:03.492 Transfer size: 4096 bytes 00:07:03.492 Vector count 1 00:07:03.492 Module: software 00:07:03.492 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:03.492 Queue depth: 32 00:07:03.492 Allocate depth: 32 00:07:03.492 # threads/core: 1 00:07:03.492 Run time: 1 seconds 00:07:03.492 Verify: Yes 00:07:03.492 00:07:03.493 Running for 1 seconds... 00:07:03.493 00:07:03.493 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.493 ------------------------------------------------------------------------------------ 00:07:03.493 0,0 76160/s 140 MiB/s 0 0 00:07:03.493 3,0 76416/s 140 MiB/s 0 0 00:07:03.493 2,0 76000/s 140 MiB/s 0 0 00:07:03.493 1,0 76288/s 140 MiB/s 0 0 00:07:03.493 ==================================================================================== 00:07:03.493 Total 304864/s 1190 MiB/s 0 0' 00:07:03.493 17:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.493 17:16:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.493 17:16:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.493 17:16:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.493 17:16:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.493 17:16:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.493 17:16:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.493 17:16:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.493 17:16:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.493 17:16:22 -- accel/accel.sh@42 -- # jq -r . 00:07:03.493 [2024-11-09 17:16:22.906349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:03.493 [2024-11-09 17:16:22.906440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362675 ] 00:07:03.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.493 [2024-11-09 17:16:22.974389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.493 [2024-11-09 17:16:23.043126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.493 [2024-11-09 17:16:23.043222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.493 [2024-11-09 17:16:23.043296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.493 [2024-11-09 17:16:23.043298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val=0xf 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val=decompress 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val=software 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val=32 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val=32 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val=1 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val=Yes 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:03.493 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:04.872 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:04.872 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:04.872 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:04.872 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:04.872 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:04.872 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:04.872 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:04.872 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.872 
17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:04.872 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.872 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.872 17:16:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.873 17:16:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:04.873 17:16:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.873 00:07:04.873 real 0m2.686s 00:07:04.873 user 0m9.090s 00:07:04.873 sys 0m0.264s 00:07:04.873 17:16:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.873 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:07:04.873 ************************************ 00:07:04.873 END TEST accel_decomp_mcore 00:07:04.873 ************************************ 00:07:04.873 17:16:24 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.873 17:16:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:04.873 17:16:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.873 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:07:04.873 ************************************ 00:07:04.873 START TEST accel_decomp_full_mcore 00:07:04.873 ************************************ 00:07:04.873 17:16:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.873 17:16:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.873 17:16:24 -- accel/accel.sh@17 -- # local accel_module 00:07:04.873 17:16:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.873 17:16:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.873 17:16:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.873 17:16:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.873 17:16:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.873 17:16:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.873 17:16:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.873 17:16:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.873 17:16:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.873 17:16:24 -- accel/accel.sh@42 -- # jq -r . 00:07:04.873 [2024-11-09 17:16:24.304952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
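With -m 0xf the results tables print one Core,Thread row per reactor, and the Total line is their sum: the four rows of roughly 76,000 transfers/s in the accel_decomp_mcore table earlier in this trace aggregate to 304864/s. A quick way to re-derive that aggregate from a saved log, as a sketch that assumes the timestamp-prefixed table format shown here and a hypothetical capture file perf.log:

# Sum per-core transfer rates from an accel_perf results table. With the
# Jenkins timestamp as field 1, the Core,Thread pair is field 2 and the
# rate (e.g. "76160/s") is field 3; the "Total" row does not match the
# pattern, so it is not double-counted.
awk '$2 ~ /^[0-9]+,[0-9]+$/ { gsub("/s", "", $3); total += $3 }
     END { printf "aggregate: %d transfers/s\n", total }' perf.log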
00:07:04.873 [2024-11-09 17:16:24.305040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362961 ] 00:07:04.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.873 [2024-11-09 17:16:24.373812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.873 [2024-11-09 17:16:24.443857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.873 [2024-11-09 17:16:24.443955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.873 [2024-11-09 17:16:24.444018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.873 [2024-11-09 17:16:24.444020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.252 17:16:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:06.252 00:07:06.252 SPDK Configuration: 00:07:06.253 Core mask: 0xf 00:07:06.253 00:07:06.253 Accel Perf Configuration: 00:07:06.253 Workload Type: decompress 00:07:06.253 Transfer size: 111250 bytes 00:07:06.253 Vector count 1 00:07:06.253 Module: software 00:07:06.253 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:06.253 Queue depth: 32 00:07:06.253 Allocate depth: 32 00:07:06.253 # threads/core: 1 00:07:06.253 Run time: 1 seconds 00:07:06.253 Verify: Yes 00:07:06.253 00:07:06.253 Running for 1 seconds... 00:07:06.253 00:07:06.253 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.253 ------------------------------------------------------------------------------------ 00:07:06.253 0,0 5792/s 239 MiB/s 0 0 00:07:06.253 3,0 5824/s 240 MiB/s 0 0 00:07:06.253 2,0 5824/s 240 MiB/s 0 0 00:07:06.253 1,0 5824/s 240 MiB/s 0 0 00:07:06.253 ==================================================================================== 00:07:06.253 Total 23264/s 2468 MiB/s 0 0' 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.253 17:16:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.253 17:16:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.253 17:16:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.253 17:16:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.253 17:16:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.253 17:16:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.253 17:16:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.253 17:16:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.253 17:16:25 -- accel/accel.sh@42 -- # jq -r . 00:07:06.253 [2024-11-09 17:16:25.653627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.253 [2024-11-09 17:16:25.653724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363236 ] 00:07:06.253 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.253 [2024-11-09 17:16:25.723327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.253 [2024-11-09 17:16:25.792045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.253 [2024-11-09 17:16:25.792142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.253 [2024-11-09 17:16:25.792250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.253 [2024-11-09 17:16:25.792252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val= 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val= 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val= 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val=0xf 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val= 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val= 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val=decompress 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val= 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val=software 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val=32 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val=32 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val=1 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val=Yes 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val= 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.253 17:16:25 -- accel/accel.sh@21 -- # val= 00:07:06.253 17:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:06.253 17:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.634 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.634 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.634 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.634 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.634 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.634 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.634 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.634 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.634 
17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.634 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.634 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.634 17:16:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.634 17:16:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:07.634 17:16:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.634 00:07:07.634 real 0m2.707s 00:07:07.634 user 0m9.141s 00:07:07.634 sys 0m0.276s 00:07:07.634 17:16:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.634 17:16:26 -- common/autotest_common.sh@10 -- # set +x 00:07:07.634 ************************************ 00:07:07.634 END TEST accel_decomp_full_mcore 00:07:07.634 ************************************ 00:07:07.634 17:16:27 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:07.634 17:16:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:07.634 17:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.634 17:16:27 -- common/autotest_common.sh@10 -- # set +x 00:07:07.634 ************************************ 00:07:07.634 START TEST accel_decomp_mthread 00:07:07.634 ************************************ 00:07:07.634 17:16:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:07.634 17:16:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.634 17:16:27 -- accel/accel.sh@17 -- # local accel_module 00:07:07.634 17:16:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:07.634 17:16:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:07.634 17:16:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.634 17:16:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.634 17:16:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.634 17:16:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.634 17:16:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.634 17:16:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.634 17:16:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.634 17:16:27 -- accel/accel.sh@42 -- # jq -r . 00:07:07.634 [2024-11-09 17:16:27.058292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.634 [2024-11-09 17:16:27.058375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363487 ] 00:07:07.634 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.635 [2024-11-09 17:16:27.126626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.635 [2024-11-09 17:16:27.194003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.014 17:16:28 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:09.014 00:07:09.014 SPDK Configuration: 00:07:09.014 Core mask: 0x1 00:07:09.014 00:07:09.015 Accel Perf Configuration: 00:07:09.015 Workload Type: decompress 00:07:09.015 Transfer size: 4096 bytes 00:07:09.015 Vector count 1 00:07:09.015 Module: software 00:07:09.015 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:09.015 Queue depth: 32 00:07:09.015 Allocate depth: 32 00:07:09.015 # threads/core: 2 00:07:09.015 Run time: 1 seconds 00:07:09.015 Verify: Yes 00:07:09.015 00:07:09.015 Running for 1 seconds... 00:07:09.015 00:07:09.015 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.015 ------------------------------------------------------------------------------------ 00:07:09.015 0,1 46976/s 86 MiB/s 0 0 00:07:09.015 0,0 46816/s 86 MiB/s 0 0 00:07:09.015 ==================================================================================== 00:07:09.015 Total 93792/s 366 MiB/s 0 0' 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.015 17:16:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.015 17:16:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.015 17:16:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.015 17:16:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.015 17:16:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.015 17:16:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.015 17:16:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.015 17:16:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.015 17:16:28 -- accel/accel.sh@42 -- # jq -r . 00:07:09.015 [2024-11-09 17:16:28.390825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
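The mthread pair exercises the -T flag instead of the core mask: the configuration dump above reports "# threads/core: 2", and correspondingly the table has two rows, 0,0 and 0,1, for the two worker threads on core 0. Reusing the variables from the earlier sketch:

# Two worker threads on one core (rows 0,0 and 0,1 in the table above):
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -T 2

# The accel_deomp_full_mthread run later in this trace combines this with
# -o 0 for full-size (111250-byte) transfers:
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2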
00:07:09.015 [2024-11-09 17:16:28.390917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363651 ] 00:07:09.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.015 [2024-11-09 17:16:28.462035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.015 [2024-11-09 17:16:28.528268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val= 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val= 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val= 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val=0x1 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val= 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val= 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val=decompress 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val= 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val=software 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val=32 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 
-- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val=32 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val=2 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val=Yes 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val= 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.015 17:16:28 -- accel/accel.sh@21 -- # val= 00:07:09.015 17:16:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.015 17:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.954 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.954 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.954 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.954 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.954 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.954 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.954 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.954 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.954 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.954 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.954 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.954 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.954 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.954 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.954 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.954 17:16:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.954 17:16:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:09.954 17:16:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.954 00:07:09.954 real 0m2.672s 00:07:09.954 user 0m2.410s 00:07:09.954 sys 0m0.272s 00:07:09.954 17:16:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.954 17:16:29 -- common/autotest_common.sh@10 -- # set +x 
00:07:09.954 ************************************ 00:07:09.954 END TEST accel_decomp_mthread 00:07:09.954 ************************************ 00:07:10.213 17:16:29 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:10.213 17:16:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:10.213 17:16:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.213 17:16:29 -- common/autotest_common.sh@10 -- # set +x 00:07:10.213 ************************************ 00:07:10.213 START TEST accel_deomp_full_mthread 00:07:10.213 ************************************ 00:07:10.213 17:16:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:10.213 17:16:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.213 17:16:29 -- accel/accel.sh@17 -- # local accel_module 00:07:10.213 17:16:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:10.213 17:16:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:10.213 17:16:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.213 17:16:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.213 17:16:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.213 17:16:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.213 17:16:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.213 17:16:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.213 17:16:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.213 17:16:29 -- accel/accel.sh@42 -- # jq -r . 00:07:10.213 [2024-11-09 17:16:29.778643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.213 [2024-11-09 17:16:29.778736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363861 ] 00:07:10.213 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.213 [2024-11-09 17:16:29.846754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.213 [2024-11-09 17:16:29.916375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.590 17:16:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:11.590 00:07:11.590 SPDK Configuration: 00:07:11.590 Core mask: 0x1 00:07:11.590 00:07:11.590 Accel Perf Configuration: 00:07:11.590 Workload Type: decompress 00:07:11.590 Transfer size: 111250 bytes 00:07:11.590 Vector count 1 00:07:11.590 Module: software 00:07:11.590 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:11.590 Queue depth: 32 00:07:11.590 Allocate depth: 32 00:07:11.590 # threads/core: 2 00:07:11.590 Run time: 1 seconds 00:07:11.590 Verify: Yes 00:07:11.590 00:07:11.590 Running for 1 seconds... 
00:07:11.590 00:07:11.590 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.590 ------------------------------------------------------------------------------------ 00:07:11.590 0,1 2848/s 117 MiB/s 0 0 00:07:11.590 0,0 2848/s 117 MiB/s 0 0 00:07:11.590 ==================================================================================== 00:07:11.590 Total 5696/s 604 MiB/s 0 0' 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.590 17:16:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.590 17:16:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.590 17:16:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.590 17:16:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.590 17:16:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.590 17:16:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.590 17:16:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.590 17:16:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.590 17:16:31 -- accel/accel.sh@42 -- # jq -r . 00:07:11.590 [2024-11-09 17:16:31.131373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.590 [2024-11-09 17:16:31.131469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364101 ] 00:07:11.590 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.590 [2024-11-09 17:16:31.200610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.590 [2024-11-09 17:16:31.271778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val=0x1 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val=decompress 
00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val=software 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val=32 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val=32 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val=2 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val=Yes 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.590 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:11.590 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:11.590 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:12.970 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.970 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.970 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.970 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.970 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.970 17:16:32 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.970 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.970 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.970 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.970 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.970 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.970 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.970 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.970 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.970 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.970 17:16:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.970 17:16:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:12.970 17:16:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.970 00:07:12.970 real 0m2.711s 00:07:12.970 user 0m2.455s 00:07:12.970 sys 0m0.263s 00:07:12.970 17:16:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.970 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:07:12.970 ************************************ 00:07:12.970 END TEST accel_deomp_full_mthread 00:07:12.970 ************************************ 00:07:12.970 17:16:32 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:12.970 17:16:32 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:12.970 17:16:32 -- accel/accel.sh@129 -- # build_accel_config 00:07:12.970 17:16:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:12.970 17:16:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.970 17:16:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.970 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:07:12.970 17:16:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.970 17:16:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.970 17:16:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.970 17:16:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.970 17:16:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.970 17:16:32 -- accel/accel.sh@42 -- # jq -r . 00:07:12.970 ************************************ 00:07:12.970 START TEST accel_dif_functional_tests 00:07:12.970 ************************************ 00:07:12.970 17:16:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:12.970 [2024-11-09 17:16:32.541389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
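accel_dif_functional_tests switches from throughput to correctness: it runs a CUnit suite over the DIF (Data Integrity Field) path, covering the Guard, App Tag and Ref Tag fields that protect each block. The *ERROR* lines that follow are expected output, not failures: the negative-path cases deliberately corrupt a tag and only pass when verification reports the mismatch, which is why every "not generated" test still ends in "passed". The binary is driven like the perf tool, with the JSON accel config on fd 62; a hedged sketch of a manual run, assuming an empty JSON object is an acceptable stand-in for the empty accel_json_cfg seen in the trace:

# Run the DIF CUnit suite by hand. The 62< redirection mirrors the
# harness's -c /dev/fd/62; '{}' is a hypothetical minimal config payload.
"$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62 62< <(echo '{}')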
00:07:12.970 [2024-11-09 17:16:32.541484] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364383 ] 00:07:12.970 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.970 [2024-11-09 17:16:32.607985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.970 [2024-11-09 17:16:32.676731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.970 [2024-11-09 17:16:32.676828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.970 [2024-11-09 17:16:32.676828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.229 00:07:13.229 00:07:13.229 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.229 http://cunit.sourceforge.net/ 00:07:13.229 00:07:13.229 00:07:13.229 Suite: accel_dif 00:07:13.229 Test: verify: DIF generated, GUARD check ...passed 00:07:13.229 Test: verify: DIF generated, APPTAG check ...passed 00:07:13.229 Test: verify: DIF generated, REFTAG check ...passed 00:07:13.229 Test: verify: DIF not generated, GUARD check ...[2024-11-09 17:16:32.745716] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:13.229 [2024-11-09 17:16:32.745773] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:13.229 passed 00:07:13.229 Test: verify: DIF not generated, APPTAG check ...[2024-11-09 17:16:32.745810] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:13.229 [2024-11-09 17:16:32.745830] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:13.229 passed 00:07:13.229 Test: verify: DIF not generated, REFTAG check ...[2024-11-09 17:16:32.745853] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:13.229 [2024-11-09 17:16:32.745871] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:13.229 passed 00:07:13.229 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:13.229 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-09 17:16:32.745917] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:13.229 passed 00:07:13.229 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:13.229 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:13.229 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:13.229 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-09 17:16:32.746018] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:13.229 passed 00:07:13.229 Test: generate copy: DIF generated, GUARD check ...passed 00:07:13.229 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:13.230 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:13.230 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:13.230 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:13.230 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:13.230 Test: generate copy: iovecs-len validate ...[2024-11-09 17:16:32.746199] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:13.230 passed 00:07:13.230 Test: generate copy: buffer alignment validate ...passed 00:07:13.230 00:07:13.230 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.230 suites 1 1 n/a 0 0 00:07:13.230 tests 20 20 20 0 0 00:07:13.230 asserts 204 204 204 0 n/a 00:07:13.230 00:07:13.230 Elapsed time = 0.000 seconds 00:07:13.230 00:07:13.230 real 0m0.387s 00:07:13.230 user 0m0.590s 00:07:13.230 sys 0m0.148s 00:07:13.230 17:16:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.230 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 ************************************ 00:07:13.230 END TEST accel_dif_functional_tests 00:07:13.230 ************************************ 00:07:13.230 00:07:13.230 real 0m57.197s 00:07:13.230 user 1m4.837s 00:07:13.230 sys 0m7.110s 00:07:13.230 17:16:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.230 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 ************************************ 00:07:13.230 END TEST accel 00:07:13.230 ************************************ 00:07:13.230 17:16:32 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:13.230 17:16:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.230 17:16:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.230 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 ************************************ 00:07:13.230 START TEST accel_rpc 00:07:13.230 ************************************ 00:07:13.230 17:16:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:13.489 * Looking for test storage... 00:07:13.489 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:13.489 17:16:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:13.489 17:16:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:13.489 17:16:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:13.489 17:16:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:13.489 17:16:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:13.490 17:16:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:13.490 17:16:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:13.490 17:16:33 -- scripts/common.sh@335 -- # IFS=.-: 00:07:13.490 17:16:33 -- scripts/common.sh@335 -- # read -ra ver1 00:07:13.490 17:16:33 -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.490 17:16:33 -- scripts/common.sh@336 -- # read -ra ver2 00:07:13.490 17:16:33 -- scripts/common.sh@337 -- # local 'op=<' 00:07:13.490 17:16:33 -- scripts/common.sh@339 -- # ver1_l=2 00:07:13.490 17:16:33 -- scripts/common.sh@340 -- # ver2_l=1 00:07:13.490 17:16:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:13.490 17:16:33 -- scripts/common.sh@343 -- # case "$op" in 00:07:13.490 17:16:33 -- scripts/common.sh@344 -- # : 1 00:07:13.490 17:16:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:13.490 17:16:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.490 17:16:33 -- scripts/common.sh@364 -- # decimal 1 00:07:13.490 17:16:33 -- scripts/common.sh@352 -- # local d=1 00:07:13.490 17:16:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.490 17:16:33 -- scripts/common.sh@354 -- # echo 1 00:07:13.490 17:16:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:13.490 17:16:33 -- scripts/common.sh@365 -- # decimal 2 00:07:13.490 17:16:33 -- scripts/common.sh@352 -- # local d=2 00:07:13.490 17:16:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.490 17:16:33 -- scripts/common.sh@354 -- # echo 2 00:07:13.490 17:16:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:13.490 17:16:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:13.490 17:16:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:13.490 17:16:33 -- scripts/common.sh@367 -- # return 0 00:07:13.490 17:16:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.490 17:16:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:13.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.490 --rc genhtml_branch_coverage=1 00:07:13.490 --rc genhtml_function_coverage=1 00:07:13.490 --rc genhtml_legend=1 00:07:13.490 --rc geninfo_all_blocks=1 00:07:13.490 --rc geninfo_unexecuted_blocks=1 00:07:13.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.490 ' 00:07:13.490 17:16:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:13.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.490 --rc genhtml_branch_coverage=1 00:07:13.490 --rc genhtml_function_coverage=1 00:07:13.490 --rc genhtml_legend=1 00:07:13.490 --rc geninfo_all_blocks=1 00:07:13.490 --rc geninfo_unexecuted_blocks=1 00:07:13.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.490 ' 00:07:13.490 17:16:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:13.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.490 --rc genhtml_branch_coverage=1 00:07:13.490 --rc genhtml_function_coverage=1 00:07:13.490 --rc genhtml_legend=1 00:07:13.490 --rc geninfo_all_blocks=1 00:07:13.490 --rc geninfo_unexecuted_blocks=1 00:07:13.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.490 ' 00:07:13.490 17:16:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:13.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.490 --rc genhtml_branch_coverage=1 00:07:13.490 --rc genhtml_function_coverage=1 00:07:13.490 --rc genhtml_legend=1 00:07:13.490 --rc geninfo_all_blocks=1 00:07:13.490 --rc geninfo_unexecuted_blocks=1 00:07:13.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.490 ' 00:07:13.490 17:16:33 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.490 17:16:33 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=364654 00:07:13.490 17:16:33 -- accel/accel_rpc.sh@15 -- # waitforlisten 364654 00:07:13.490 17:16:33 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:13.490 17:16:33 -- common/autotest_common.sh@829 -- # '[' -z 364654 ']' 00:07:13.490 17:16:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.490 17:16:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.490 
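
Aside: the xtrace above steps through the dotted-version comparison in scripts/common.sh (the 'lt 1.15 2' call): both version strings are split on '.', '-' and ':', and the components are compared numerically left to right up to the longer of the two lengths. An illustrative reimplementation of that technique, assuming numeric components (a sketch, not the SPDK helper itself):

    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"                 # split "1.15" into (1 15), as the traced IFS=.-: does
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}          # missing components count as 0 in this sketch
            (( a < b )) && return 0                    # strictly smaller at this position: "less than" holds
            (( a > b )) && return 1
        done
        return 1                                       # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo smaller                  # prints "smaller", matching the lt 1.15 2 result traced above
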
17:16:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.490 17:16:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.490 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:07:13.490 [2024-11-09 17:16:33.198345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.490 [2024-11-09 17:16:33.198435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364654 ] 00:07:13.490 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.749 [2024-11-09 17:16:33.266961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.749 [2024-11-09 17:16:33.341332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:13.749 [2024-11-09 17:16:33.341441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.318 17:16:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.318 17:16:34 -- common/autotest_common.sh@862 -- # return 0 00:07:14.318 17:16:34 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:14.318 17:16:34 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:14.318 17:16:34 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:14.318 17:16:34 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:14.318 17:16:34 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:14.318 17:16:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.318 17:16:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.318 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.318 ************************************ 00:07:14.318 START TEST accel_assign_opcode 00:07:14.318 ************************************ 00:07:14.318 17:16:34 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:14.318 17:16:34 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:14.318 17:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.318 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.318 [2024-11-09 17:16:34.035481] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:14.318 17:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.318 17:16:34 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:14.318 17:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.318 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.318 [2024-11-09 17:16:34.043496] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:14.318 17:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.318 17:16:34 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:14.318 17:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.318 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.578 17:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.578 17:16:34 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:14.578 17:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.578 17:16:34 -- 
accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:14.578 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.578 17:16:34 -- accel/accel_rpc.sh@42 -- # grep software 00:07:14.578 17:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.578 software 00:07:14.578 00:07:14.578 real 0m0.227s 00:07:14.578 user 0m0.048s 00:07:14.578 sys 0m0.009s 00:07:14.578 17:16:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.578 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.578 ************************************ 00:07:14.578 END TEST accel_assign_opcode 00:07:14.578 ************************************ 00:07:14.578 17:16:34 -- accel/accel_rpc.sh@55 -- # killprocess 364654 00:07:14.578 17:16:34 -- common/autotest_common.sh@936 -- # '[' -z 364654 ']' 00:07:14.578 17:16:34 -- common/autotest_common.sh@940 -- # kill -0 364654 00:07:14.578 17:16:34 -- common/autotest_common.sh@941 -- # uname 00:07:14.578 17:16:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.579 17:16:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 364654 00:07:14.838 17:16:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:14.838 17:16:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:14.838 17:16:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 364654' 00:07:14.838 killing process with pid 364654 00:07:14.838 17:16:34 -- common/autotest_common.sh@955 -- # kill 364654 00:07:14.838 17:16:34 -- common/autotest_common.sh@960 -- # wait 364654 00:07:15.097 00:07:15.097 real 0m1.675s 00:07:15.097 user 0m1.715s 00:07:15.097 sys 0m0.484s 00:07:15.097 17:16:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.097 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.097 ************************************ 00:07:15.097 END TEST accel_rpc 00:07:15.097 ************************************ 00:07:15.097 17:16:34 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.097 17:16:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:15.097 17:16:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.097 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.097 ************************************ 00:07:15.097 START TEST app_cmdline 00:07:15.097 ************************************ 00:07:15.097 17:16:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.097 * Looking for test storage... 
00:07:15.097 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:15.097 17:16:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:15.097 17:16:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:15.097 17:16:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:15.357 17:16:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:15.357 17:16:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:15.357 17:16:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:15.357 17:16:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:15.357 17:16:34 -- scripts/common.sh@335 -- # IFS=.-: 00:07:15.357 17:16:34 -- scripts/common.sh@335 -- # read -ra ver1 00:07:15.357 17:16:34 -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.357 17:16:34 -- scripts/common.sh@336 -- # read -ra ver2 00:07:15.357 17:16:34 -- scripts/common.sh@337 -- # local 'op=<' 00:07:15.358 17:16:34 -- scripts/common.sh@339 -- # ver1_l=2 00:07:15.358 17:16:34 -- scripts/common.sh@340 -- # ver2_l=1 00:07:15.358 17:16:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:15.358 17:16:34 -- scripts/common.sh@343 -- # case "$op" in 00:07:15.358 17:16:34 -- scripts/common.sh@344 -- # : 1 00:07:15.358 17:16:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:15.358 17:16:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.358 17:16:34 -- scripts/common.sh@364 -- # decimal 1 00:07:15.358 17:16:34 -- scripts/common.sh@352 -- # local d=1 00:07:15.358 17:16:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.358 17:16:34 -- scripts/common.sh@354 -- # echo 1 00:07:15.358 17:16:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:15.358 17:16:34 -- scripts/common.sh@365 -- # decimal 2 00:07:15.358 17:16:34 -- scripts/common.sh@352 -- # local d=2 00:07:15.358 17:16:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.358 17:16:34 -- scripts/common.sh@354 -- # echo 2 00:07:15.358 17:16:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:15.358 17:16:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:15.358 17:16:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:15.358 17:16:34 -- scripts/common.sh@367 -- # return 0 00:07:15.358 17:16:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.358 17:16:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:15.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.358 --rc genhtml_branch_coverage=1 00:07:15.358 --rc genhtml_function_coverage=1 00:07:15.358 --rc genhtml_legend=1 00:07:15.358 --rc geninfo_all_blocks=1 00:07:15.358 --rc geninfo_unexecuted_blocks=1 00:07:15.358 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:15.358 ' 00:07:15.358 17:16:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:15.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.358 --rc genhtml_branch_coverage=1 00:07:15.358 --rc genhtml_function_coverage=1 00:07:15.358 --rc genhtml_legend=1 00:07:15.358 --rc geninfo_all_blocks=1 00:07:15.358 --rc geninfo_unexecuted_blocks=1 00:07:15.358 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:15.358 ' 00:07:15.358 17:16:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:15.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.358 --rc genhtml_branch_coverage=1 00:07:15.358 
--rc genhtml_function_coverage=1 00:07:15.358 --rc genhtml_legend=1 00:07:15.358 --rc geninfo_all_blocks=1 00:07:15.358 --rc geninfo_unexecuted_blocks=1 00:07:15.358 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:15.358 ' 00:07:15.358 17:16:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:15.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.358 --rc genhtml_branch_coverage=1 00:07:15.358 --rc genhtml_function_coverage=1 00:07:15.358 --rc genhtml_legend=1 00:07:15.358 --rc geninfo_all_blocks=1 00:07:15.358 --rc geninfo_unexecuted_blocks=1 00:07:15.358 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:15.358 ' 00:07:15.358 17:16:34 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.358 17:16:34 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.358 17:16:34 -- app/cmdline.sh@17 -- # spdk_tgt_pid=365062 00:07:15.358 17:16:34 -- app/cmdline.sh@18 -- # waitforlisten 365062 00:07:15.358 17:16:34 -- common/autotest_common.sh@829 -- # '[' -z 365062 ']' 00:07:15.358 17:16:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.358 17:16:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.358 17:16:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.358 17:16:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.358 17:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.358 [2024-11-09 17:16:34.912906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
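
Aside: one detail worth calling out from the spdk_tgt launch above is that the target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the allow-list itself is what cmdline.sh exercises. Both rpc.py calls below occur verbatim later in this run and are sketched here only for orientation; the second is expected to be rejected:

    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    $rpc spdk_get_version            # on the allow-list: returns the version object printed below
    $rpc env_dpdk_get_mem_stats      # not on the allow-list: fails with JSON-RPC -32601 "Method not found"
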
00:07:15.358 [2024-11-09 17:16:34.912965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365062 ] 00:07:15.358 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.358 [2024-11-09 17:16:34.970963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.358 [2024-11-09 17:16:35.038830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:15.358 [2024-11-09 17:16:35.038933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.296 17:16:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.296 17:16:35 -- common/autotest_common.sh@862 -- # return 0 00:07:16.296 17:16:35 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:16.296 { 00:07:16.296 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:16.296 "fields": { 00:07:16.296 "major": 24, 00:07:16.296 "minor": 1, 00:07:16.296 "patch": 1, 00:07:16.296 "suffix": "-pre", 00:07:16.296 "commit": "c13c99a5e" 00:07:16.296 } 00:07:16.296 } 00:07:16.296 17:16:35 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:16.296 17:16:35 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:16.296 17:16:35 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:16.296 17:16:35 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:16.296 17:16:35 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:16.296 17:16:35 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:16.296 17:16:35 -- app/cmdline.sh@26 -- # sort 00:07:16.296 17:16:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.296 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.296 17:16:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.296 17:16:35 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:16.296 17:16:35 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:16.296 17:16:35 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.296 17:16:35 -- common/autotest_common.sh@650 -- # local es=0 00:07:16.296 17:16:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.296 17:16:35 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:16.296 17:16:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.296 17:16:35 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:16.296 17:16:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.296 17:16:35 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:16.296 17:16:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.296 17:16:35 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:16.296 17:16:35 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:16.296 17:16:35 -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.556 request: 00:07:16.556 { 00:07:16.556 "method": "env_dpdk_get_mem_stats", 00:07:16.556 "req_id": 1 00:07:16.556 } 00:07:16.556 Got JSON-RPC error response 00:07:16.556 response: 00:07:16.556 { 00:07:16.556 "code": -32601, 00:07:16.556 "message": "Method not found" 00:07:16.556 } 00:07:16.556 17:16:36 -- common/autotest_common.sh@653 -- # es=1 00:07:16.556 17:16:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.556 17:16:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:16.556 17:16:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.556 17:16:36 -- app/cmdline.sh@1 -- # killprocess 365062 00:07:16.556 17:16:36 -- common/autotest_common.sh@936 -- # '[' -z 365062 ']' 00:07:16.556 17:16:36 -- common/autotest_common.sh@940 -- # kill -0 365062 00:07:16.556 17:16:36 -- common/autotest_common.sh@941 -- # uname 00:07:16.556 17:16:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:16.556 17:16:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 365062 00:07:16.556 17:16:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:16.556 17:16:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:16.556 17:16:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 365062' 00:07:16.556 killing process with pid 365062 00:07:16.556 17:16:36 -- common/autotest_common.sh@955 -- # kill 365062 00:07:16.556 17:16:36 -- common/autotest_common.sh@960 -- # wait 365062 00:07:16.815 00:07:16.815 real 0m1.765s 00:07:16.815 user 0m2.022s 00:07:16.815 sys 0m0.507s 00:07:16.815 17:16:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.815 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:16.815 ************************************ 00:07:16.815 END TEST app_cmdline 00:07:16.815 ************************************ 00:07:16.815 17:16:36 -- spdk/autotest.sh@179 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:16.815 17:16:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.815 17:16:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.815 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:16.815 ************************************ 00:07:16.815 START TEST version 00:07:16.815 ************************************ 00:07:16.815 17:16:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:17.075 * Looking for test storage... 
00:07:17.076 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:17.076 17:16:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:17.076 17:16:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:17.076 17:16:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:17.076 17:16:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:17.076 17:16:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:17.076 17:16:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:17.076 17:16:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:17.076 17:16:36 -- scripts/common.sh@335 -- # IFS=.-: 00:07:17.076 17:16:36 -- scripts/common.sh@335 -- # read -ra ver1 00:07:17.076 17:16:36 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.076 17:16:36 -- scripts/common.sh@336 -- # read -ra ver2 00:07:17.076 17:16:36 -- scripts/common.sh@337 -- # local 'op=<' 00:07:17.076 17:16:36 -- scripts/common.sh@339 -- # ver1_l=2 00:07:17.076 17:16:36 -- scripts/common.sh@340 -- # ver2_l=1 00:07:17.076 17:16:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:17.076 17:16:36 -- scripts/common.sh@343 -- # case "$op" in 00:07:17.076 17:16:36 -- scripts/common.sh@344 -- # : 1 00:07:17.076 17:16:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:17.076 17:16:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.076 17:16:36 -- scripts/common.sh@364 -- # decimal 1 00:07:17.076 17:16:36 -- scripts/common.sh@352 -- # local d=1 00:07:17.076 17:16:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.076 17:16:36 -- scripts/common.sh@354 -- # echo 1 00:07:17.076 17:16:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:17.076 17:16:36 -- scripts/common.sh@365 -- # decimal 2 00:07:17.076 17:16:36 -- scripts/common.sh@352 -- # local d=2 00:07:17.076 17:16:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.076 17:16:36 -- scripts/common.sh@354 -- # echo 2 00:07:17.076 17:16:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:17.076 17:16:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:17.076 17:16:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:17.076 17:16:36 -- scripts/common.sh@367 -- # return 0 00:07:17.076 17:16:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.076 17:16:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:17.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.076 --rc genhtml_branch_coverage=1 00:07:17.076 --rc genhtml_function_coverage=1 00:07:17.076 --rc genhtml_legend=1 00:07:17.076 --rc geninfo_all_blocks=1 00:07:17.076 --rc geninfo_unexecuted_blocks=1 00:07:17.076 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.076 ' 00:07:17.076 17:16:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:17.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.076 --rc genhtml_branch_coverage=1 00:07:17.076 --rc genhtml_function_coverage=1 00:07:17.076 --rc genhtml_legend=1 00:07:17.076 --rc geninfo_all_blocks=1 00:07:17.076 --rc geninfo_unexecuted_blocks=1 00:07:17.076 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.076 ' 00:07:17.076 17:16:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:17.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.076 --rc genhtml_branch_coverage=1 00:07:17.076 
--rc genhtml_function_coverage=1 00:07:17.076 --rc genhtml_legend=1 00:07:17.076 --rc geninfo_all_blocks=1 00:07:17.076 --rc geninfo_unexecuted_blocks=1 00:07:17.076 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.076 ' 00:07:17.076 17:16:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:17.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.076 --rc genhtml_branch_coverage=1 00:07:17.076 --rc genhtml_function_coverage=1 00:07:17.076 --rc genhtml_legend=1 00:07:17.076 --rc geninfo_all_blocks=1 00:07:17.076 --rc geninfo_unexecuted_blocks=1 00:07:17.076 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.076 ' 00:07:17.076 17:16:36 -- app/version.sh@17 -- # get_header_version major 00:07:17.076 17:16:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:17.076 17:16:36 -- app/version.sh@14 -- # tr -d '"' 00:07:17.076 17:16:36 -- app/version.sh@14 -- # cut -f2 00:07:17.076 17:16:36 -- app/version.sh@17 -- # major=24 00:07:17.076 17:16:36 -- app/version.sh@18 -- # get_header_version minor 00:07:17.076 17:16:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:17.076 17:16:36 -- app/version.sh@14 -- # tr -d '"' 00:07:17.076 17:16:36 -- app/version.sh@14 -- # cut -f2 00:07:17.076 17:16:36 -- app/version.sh@18 -- # minor=1 00:07:17.076 17:16:36 -- app/version.sh@19 -- # get_header_version patch 00:07:17.076 17:16:36 -- app/version.sh@14 -- # tr -d '"' 00:07:17.076 17:16:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:17.076 17:16:36 -- app/version.sh@14 -- # cut -f2 00:07:17.076 17:16:36 -- app/version.sh@19 -- # patch=1 00:07:17.076 17:16:36 -- app/version.sh@20 -- # get_header_version suffix 00:07:17.076 17:16:36 -- app/version.sh@14 -- # tr -d '"' 00:07:17.076 17:16:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:17.076 17:16:36 -- app/version.sh@14 -- # cut -f2 00:07:17.076 17:16:36 -- app/version.sh@20 -- # suffix=-pre 00:07:17.076 17:16:36 -- app/version.sh@22 -- # version=24.1 00:07:17.076 17:16:36 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:17.076 17:16:36 -- app/version.sh@25 -- # version=24.1.1 00:07:17.076 17:16:36 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:17.076 17:16:36 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:17.076 17:16:36 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:17.076 17:16:36 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:17.076 17:16:36 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:17.076 00:07:17.076 real 0m0.263s 00:07:17.076 user 0m0.147s 00:07:17.076 sys 0m0.162s 00:07:17.076 17:16:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.076 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.076 
************************************ 00:07:17.076 END TEST version 00:07:17.076 ************************************ 00:07:17.076 17:16:36 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:17.076 17:16:36 -- spdk/autotest.sh@191 -- # uname -s 00:07:17.076 17:16:36 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:17.076 17:16:36 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:17.076 17:16:36 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:17.076 17:16:36 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:17.076 17:16:36 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:17.076 17:16:36 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:17.076 17:16:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:17.076 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.337 17:16:36 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:07:17.337 17:16:36 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:07:17.337 17:16:36 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:07:17.337 17:16:36 -- spdk/autotest.sh@361 -- # [[ 1 -eq 1 ]] 00:07:17.337 17:16:36 -- spdk/autotest.sh@362 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:17.337 17:16:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.337 17:16:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.337 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.337 ************************************ 00:07:17.337 START TEST llvm_fuzz 00:07:17.337 ************************************ 00:07:17.337 17:16:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:17.337 * Looking for test storage... 
00:07:17.337 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:17.337 17:16:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:17.337 17:16:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:17.337 17:16:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:17.337 17:16:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:17.337 17:16:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:17.337 17:16:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:17.337 17:16:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:17.337 17:16:37 -- scripts/common.sh@335 -- # IFS=.-: 00:07:17.337 17:16:37 -- scripts/common.sh@335 -- # read -ra ver1 00:07:17.337 17:16:37 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.337 17:16:37 -- scripts/common.sh@336 -- # read -ra ver2 00:07:17.337 17:16:37 -- scripts/common.sh@337 -- # local 'op=<' 00:07:17.337 17:16:37 -- scripts/common.sh@339 -- # ver1_l=2 00:07:17.337 17:16:37 -- scripts/common.sh@340 -- # ver2_l=1 00:07:17.337 17:16:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:17.337 17:16:37 -- scripts/common.sh@343 -- # case "$op" in 00:07:17.337 17:16:37 -- scripts/common.sh@344 -- # : 1 00:07:17.337 17:16:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:17.337 17:16:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.337 17:16:37 -- scripts/common.sh@364 -- # decimal 1 00:07:17.337 17:16:37 -- scripts/common.sh@352 -- # local d=1 00:07:17.337 17:16:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.337 17:16:37 -- scripts/common.sh@354 -- # echo 1 00:07:17.337 17:16:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:17.337 17:16:37 -- scripts/common.sh@365 -- # decimal 2 00:07:17.337 17:16:37 -- scripts/common.sh@352 -- # local d=2 00:07:17.337 17:16:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.337 17:16:37 -- scripts/common.sh@354 -- # echo 2 00:07:17.337 17:16:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:17.337 17:16:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:17.337 17:16:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:17.337 17:16:37 -- scripts/common.sh@367 -- # return 0 00:07:17.337 17:16:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.337 17:16:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.337 --rc genhtml_branch_coverage=1 00:07:17.337 --rc genhtml_function_coverage=1 00:07:17.337 --rc genhtml_legend=1 00:07:17.337 --rc geninfo_all_blocks=1 00:07:17.337 --rc geninfo_unexecuted_blocks=1 00:07:17.337 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.337 ' 00:07:17.337 17:16:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.337 --rc genhtml_branch_coverage=1 00:07:17.337 --rc genhtml_function_coverage=1 00:07:17.337 --rc genhtml_legend=1 00:07:17.337 --rc geninfo_all_blocks=1 00:07:17.337 --rc geninfo_unexecuted_blocks=1 00:07:17.337 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.337 ' 00:07:17.337 17:16:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.337 --rc genhtml_branch_coverage=1 00:07:17.337 
--rc genhtml_function_coverage=1 00:07:17.337 --rc genhtml_legend=1 00:07:17.337 --rc geninfo_all_blocks=1 00:07:17.337 --rc geninfo_unexecuted_blocks=1 00:07:17.337 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.337 ' 00:07:17.337 17:16:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.337 --rc genhtml_branch_coverage=1 00:07:17.337 --rc genhtml_function_coverage=1 00:07:17.337 --rc genhtml_legend=1 00:07:17.337 --rc geninfo_all_blocks=1 00:07:17.337 --rc geninfo_unexecuted_blocks=1 00:07:17.337 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.337 ' 00:07:17.337 17:16:37 -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:17.337 17:16:37 -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:17.337 17:16:37 -- common/autotest_common.sh@548 -- # fuzzers=() 00:07:17.337 17:16:37 -- common/autotest_common.sh@548 -- # local fuzzers 00:07:17.337 17:16:37 -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:07:17.337 17:16:37 -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:17.337 17:16:37 -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:17.337 17:16:37 -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:17.337 17:16:37 -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:17.337 17:16:37 -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:17.337 17:16:37 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:17.337 17:16:37 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:17.337 17:16:37 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:17.337 17:16:37 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:17.337 17:16:37 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:17.337 17:16:37 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:17.337 17:16:37 -- fuzz/llvm.sh@19 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:17.337 17:16:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.337 17:16:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.337 17:16:37 -- common/autotest_common.sh@10 -- # set +x 00:07:17.337 ************************************ 00:07:17.337 START TEST nvmf_fuzz 00:07:17.337 ************************************ 00:07:17.337 17:16:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:17.599 * Looking for test storage... 
00:07:17.599 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.599 17:16:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:17.599 17:16:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:17.599 17:16:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:17.599 17:16:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:17.599 17:16:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:17.599 17:16:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:17.599 17:16:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:17.599 17:16:37 -- scripts/common.sh@335 -- # IFS=.-: 00:07:17.599 17:16:37 -- scripts/common.sh@335 -- # read -ra ver1 00:07:17.599 17:16:37 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.599 17:16:37 -- scripts/common.sh@336 -- # read -ra ver2 00:07:17.599 17:16:37 -- scripts/common.sh@337 -- # local 'op=<' 00:07:17.599 17:16:37 -- scripts/common.sh@339 -- # ver1_l=2 00:07:17.599 17:16:37 -- scripts/common.sh@340 -- # ver2_l=1 00:07:17.599 17:16:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:17.599 17:16:37 -- scripts/common.sh@343 -- # case "$op" in 00:07:17.599 17:16:37 -- scripts/common.sh@344 -- # : 1 00:07:17.599 17:16:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:17.599 17:16:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.599 17:16:37 -- scripts/common.sh@364 -- # decimal 1 00:07:17.599 17:16:37 -- scripts/common.sh@352 -- # local d=1 00:07:17.599 17:16:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.599 17:16:37 -- scripts/common.sh@354 -- # echo 1 00:07:17.599 17:16:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:17.599 17:16:37 -- scripts/common.sh@365 -- # decimal 2 00:07:17.599 17:16:37 -- scripts/common.sh@352 -- # local d=2 00:07:17.599 17:16:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.599 17:16:37 -- scripts/common.sh@354 -- # echo 2 00:07:17.599 17:16:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:17.599 17:16:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:17.599 17:16:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:17.599 17:16:37 -- scripts/common.sh@367 -- # return 0 00:07:17.599 17:16:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.599 17:16:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:17.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.599 --rc genhtml_branch_coverage=1 00:07:17.599 --rc genhtml_function_coverage=1 00:07:17.599 --rc genhtml_legend=1 00:07:17.599 --rc geninfo_all_blocks=1 00:07:17.599 --rc geninfo_unexecuted_blocks=1 00:07:17.599 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.599 ' 00:07:17.599 17:16:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:17.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.599 --rc genhtml_branch_coverage=1 00:07:17.599 --rc genhtml_function_coverage=1 00:07:17.599 --rc genhtml_legend=1 00:07:17.599 --rc geninfo_all_blocks=1 00:07:17.599 --rc geninfo_unexecuted_blocks=1 00:07:17.599 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.599 ' 00:07:17.599 17:16:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:17.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.599 --rc genhtml_branch_coverage=1 
00:07:17.599 --rc genhtml_function_coverage=1 00:07:17.599 --rc genhtml_legend=1 00:07:17.599 --rc geninfo_all_blocks=1 00:07:17.599 --rc geninfo_unexecuted_blocks=1 00:07:17.599 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.599 ' 00:07:17.599 17:16:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:17.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.599 --rc genhtml_branch_coverage=1 00:07:17.599 --rc genhtml_function_coverage=1 00:07:17.599 --rc genhtml_legend=1 00:07:17.599 --rc geninfo_all_blocks=1 00:07:17.599 --rc geninfo_unexecuted_blocks=1 00:07:17.599 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.599 ' 00:07:17.600 17:16:37 -- nvmf/run.sh@52 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:17.600 17:16:37 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:17.600 17:16:37 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:17.600 17:16:37 -- common/autotest_common.sh@34 -- # set -e 00:07:17.600 17:16:37 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:17.600 17:16:37 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:17.600 17:16:37 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:17.600 17:16:37 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:17.600 17:16:37 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:17.600 17:16:37 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:17.600 17:16:37 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:17.600 17:16:37 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:17.600 17:16:37 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:17.600 17:16:37 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:17.600 17:16:37 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:17.600 17:16:37 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:17.600 17:16:37 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:17.600 17:16:37 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:17.600 17:16:37 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:17.600 17:16:37 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:17.600 17:16:37 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:17.600 17:16:37 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:17.600 17:16:37 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:17.600 17:16:37 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:17.600 17:16:37 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:17.600 17:16:37 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:17.600 17:16:37 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:17.600 17:16:37 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:17.600 17:16:37 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:17.600 17:16:37 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:17.600 17:16:37 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:17.600 17:16:37 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:17.600 17:16:37 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:17.600 
17:16:37 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:17.600 17:16:37 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:17.600 17:16:37 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:17.600 17:16:37 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:17.600 17:16:37 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:17.600 17:16:37 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:17.600 17:16:37 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:17.600 17:16:37 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:17.600 17:16:37 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:17.600 17:16:37 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:17.600 17:16:37 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:17.600 17:16:37 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:17.600 17:16:37 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:17.600 17:16:37 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:17.600 17:16:37 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:17.600 17:16:37 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:17.600 17:16:37 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:17.600 17:16:37 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:17.600 17:16:37 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:17.600 17:16:37 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:17.600 17:16:37 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:17.600 17:16:37 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:17.600 17:16:37 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:17.600 17:16:37 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:17.600 17:16:37 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:17.600 17:16:37 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:17.600 17:16:37 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:17.600 17:16:37 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:17.600 17:16:37 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:17.600 17:16:37 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:17.600 17:16:37 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:17.600 17:16:37 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:17.600 17:16:37 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:17.600 17:16:37 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:17.600 17:16:37 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:17.600 17:16:37 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:17.600 17:16:37 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:17.600 17:16:37 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:17.600 17:16:37 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:07:17.600 17:16:37 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:17.600 17:16:37 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:17.600 17:16:37 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:17.600 17:16:37 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:17.600 17:16:37 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:17.600 17:16:37 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:17.600 17:16:37 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 
00:07:17.600 17:16:37 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:17.600 17:16:37 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:17.600 17:16:37 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:17.600 17:16:37 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:17.600 17:16:37 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:17.600 17:16:37 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:17.600 17:16:37 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:17.600 17:16:37 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:17.600 17:16:37 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:17.600 17:16:37 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:17.600 17:16:37 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:17.600 17:16:37 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:17.600 17:16:37 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:17.600 17:16:37 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:17.600 17:16:37 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:17.600 17:16:37 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:17.600 17:16:37 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:17.600 17:16:37 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:17.600 17:16:37 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:17.600 17:16:37 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:17.600 17:16:37 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:17.600 17:16:37 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:17.600 17:16:37 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:17.600 17:16:37 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:17.600 #define SPDK_CONFIG_H 00:07:17.600 #define SPDK_CONFIG_APPS 1 00:07:17.600 #define SPDK_CONFIG_ARCH native 00:07:17.600 #undef SPDK_CONFIG_ASAN 00:07:17.600 #undef SPDK_CONFIG_AVAHI 00:07:17.600 #undef SPDK_CONFIG_CET 00:07:17.600 #define SPDK_CONFIG_COVERAGE 1 00:07:17.600 #define SPDK_CONFIG_CROSS_PREFIX 00:07:17.600 #undef SPDK_CONFIG_CRYPTO 00:07:17.600 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:17.600 #undef SPDK_CONFIG_CUSTOMOCF 00:07:17.600 #undef SPDK_CONFIG_DAOS 00:07:17.600 #define SPDK_CONFIG_DAOS_DIR 00:07:17.600 #define SPDK_CONFIG_DEBUG 1 00:07:17.600 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:17.600 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:17.600 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:17.600 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:17.600 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:17.600 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:17.600 #define SPDK_CONFIG_EXAMPLES 1 00:07:17.600 #undef SPDK_CONFIG_FC 00:07:17.600 #define SPDK_CONFIG_FC_PATH 00:07:17.600 #define SPDK_CONFIG_FIO_PLUGIN 1 
00:07:17.600 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:17.600 #undef SPDK_CONFIG_FUSE 00:07:17.600 #define SPDK_CONFIG_FUZZER 1 00:07:17.600 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:17.600 #undef SPDK_CONFIG_GOLANG 00:07:17.600 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:17.600 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:17.600 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:17.600 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:17.600 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:17.600 #define SPDK_CONFIG_IDXD 1 00:07:17.600 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:17.600 #undef SPDK_CONFIG_IPSEC_MB 00:07:17.600 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:17.600 #define SPDK_CONFIG_ISAL 1 00:07:17.600 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:17.600 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:17.600 #define SPDK_CONFIG_LIBDIR 00:07:17.600 #undef SPDK_CONFIG_LTO 00:07:17.600 #define SPDK_CONFIG_MAX_LCORES 00:07:17.600 #define SPDK_CONFIG_NVME_CUSE 1 00:07:17.600 #undef SPDK_CONFIG_OCF 00:07:17.601 #define SPDK_CONFIG_OCF_PATH 00:07:17.601 #define SPDK_CONFIG_OPENSSL_PATH 00:07:17.601 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:17.601 #undef SPDK_CONFIG_PGO_USE 00:07:17.601 #define SPDK_CONFIG_PREFIX /usr/local 00:07:17.601 #undef SPDK_CONFIG_RAID5F 00:07:17.601 #undef SPDK_CONFIG_RBD 00:07:17.601 #define SPDK_CONFIG_RDMA 1 00:07:17.601 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:17.601 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:17.601 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:17.601 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:17.601 #undef SPDK_CONFIG_SHARED 00:07:17.601 #undef SPDK_CONFIG_SMA 00:07:17.601 #define SPDK_CONFIG_TESTS 1 00:07:17.601 #undef SPDK_CONFIG_TSAN 00:07:17.601 #define SPDK_CONFIG_UBLK 1 00:07:17.601 #define SPDK_CONFIG_UBSAN 1 00:07:17.601 #undef SPDK_CONFIG_UNIT_TESTS 00:07:17.601 #undef SPDK_CONFIG_URING 00:07:17.601 #define SPDK_CONFIG_URING_PATH 00:07:17.601 #undef SPDK_CONFIG_URING_ZNS 00:07:17.601 #undef SPDK_CONFIG_USDT 00:07:17.601 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:17.601 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:17.601 #define SPDK_CONFIG_VFIO_USER 1 00:07:17.601 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:17.601 #define SPDK_CONFIG_VHOST 1 00:07:17.601 #define SPDK_CONFIG_VIRTIO 1 00:07:17.601 #undef SPDK_CONFIG_VTUNE 00:07:17.601 #define SPDK_CONFIG_VTUNE_DIR 00:07:17.601 #define SPDK_CONFIG_WERROR 1 00:07:17.601 #define SPDK_CONFIG_WPDK_DIR 00:07:17.601 #undef SPDK_CONFIG_XNVME 00:07:17.601 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:17.601 17:16:37 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:17.601 17:16:37 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:17.601 17:16:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.601 17:16:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.601 17:16:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.601 17:16:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.601 17:16:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.601 17:16:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.601 17:16:37 -- paths/export.sh@5 -- # export PATH 00:07:17.601 17:16:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.601 17:16:37 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:17.601 17:16:37 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:17.601 17:16:37 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:17.601 17:16:37 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:17.601 17:16:37 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:17.601 17:16:37 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:17.601 17:16:37 -- pm/common@16 -- # TEST_TAG=N/A 00:07:17.601 17:16:37 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:17.601 17:16:37 -- common/autotest_common.sh@52 -- # : 1 00:07:17.601 17:16:37 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:17.601 17:16:37 -- common/autotest_common.sh@56 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:17.601 17:16:37 -- common/autotest_common.sh@58 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:17.601 17:16:37 -- common/autotest_common.sh@60 -- # : 1 00:07:17.601 17:16:37 -- common/autotest_common.sh@61 -- # export 
SPDK_RUN_FUNCTIONAL_TEST 00:07:17.601 17:16:37 -- common/autotest_common.sh@62 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:17.601 17:16:37 -- common/autotest_common.sh@64 -- # : 00:07:17.601 17:16:37 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:17.601 17:16:37 -- common/autotest_common.sh@66 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:17.601 17:16:37 -- common/autotest_common.sh@68 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:17.601 17:16:37 -- common/autotest_common.sh@70 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:17.601 17:16:37 -- common/autotest_common.sh@72 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:17.601 17:16:37 -- common/autotest_common.sh@74 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:17.601 17:16:37 -- common/autotest_common.sh@76 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:17.601 17:16:37 -- common/autotest_common.sh@78 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:17.601 17:16:37 -- common/autotest_common.sh@80 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:17.601 17:16:37 -- common/autotest_common.sh@82 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:17.601 17:16:37 -- common/autotest_common.sh@84 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:17.601 17:16:37 -- common/autotest_common.sh@86 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:17.601 17:16:37 -- common/autotest_common.sh@88 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:17.601 17:16:37 -- common/autotest_common.sh@90 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:17.601 17:16:37 -- common/autotest_common.sh@92 -- # : 1 00:07:17.601 17:16:37 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:17.601 17:16:37 -- common/autotest_common.sh@94 -- # : 1 00:07:17.601 17:16:37 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:17.601 17:16:37 -- common/autotest_common.sh@96 -- # : rdma 00:07:17.601 17:16:37 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:17.601 17:16:37 -- common/autotest_common.sh@98 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:17.601 17:16:37 -- common/autotest_common.sh@100 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:17.601 17:16:37 -- common/autotest_common.sh@102 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:17.601 17:16:37 -- common/autotest_common.sh@104 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:17.601 17:16:37 -- common/autotest_common.sh@106 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:17.601 17:16:37 -- common/autotest_common.sh@108 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@109 
-- # export SPDK_TEST_VHOST_INIT 00:07:17.601 17:16:37 -- common/autotest_common.sh@110 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:17.601 17:16:37 -- common/autotest_common.sh@112 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:17.601 17:16:37 -- common/autotest_common.sh@114 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:17.601 17:16:37 -- common/autotest_common.sh@116 -- # : 1 00:07:17.601 17:16:37 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:17.601 17:16:37 -- common/autotest_common.sh@118 -- # : 00:07:17.601 17:16:37 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:17.601 17:16:37 -- common/autotest_common.sh@120 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:17.601 17:16:37 -- common/autotest_common.sh@122 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:17.601 17:16:37 -- common/autotest_common.sh@124 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:17.601 17:16:37 -- common/autotest_common.sh@126 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:17.601 17:16:37 -- common/autotest_common.sh@128 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:17.601 17:16:37 -- common/autotest_common.sh@130 -- # : 0 00:07:17.601 17:16:37 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:17.601 17:16:37 -- common/autotest_common.sh@132 -- # : 00:07:17.601 17:16:37 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:17.601 17:16:37 -- common/autotest_common.sh@134 -- # : true 00:07:17.601 17:16:37 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:17.601 17:16:37 -- common/autotest_common.sh@136 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:17.602 17:16:37 -- common/autotest_common.sh@138 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:17.602 17:16:37 -- common/autotest_common.sh@140 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:17.602 17:16:37 -- common/autotest_common.sh@142 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:17.602 17:16:37 -- common/autotest_common.sh@144 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:17.602 17:16:37 -- common/autotest_common.sh@146 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:17.602 17:16:37 -- common/autotest_common.sh@148 -- # : 00:07:17.602 17:16:37 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:17.602 17:16:37 -- common/autotest_common.sh@150 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:17.602 17:16:37 -- common/autotest_common.sh@152 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:17.602 17:16:37 -- common/autotest_common.sh@154 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:17.602 17:16:37 -- common/autotest_common.sh@156 -- # : 0 00:07:17.602 17:16:37 -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:17.602 17:16:37 -- common/autotest_common.sh@158 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:17.602 17:16:37 -- common/autotest_common.sh@160 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:17.602 17:16:37 -- common/autotest_common.sh@163 -- # : 00:07:17.602 17:16:37 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:17.602 17:16:37 -- common/autotest_common.sh@165 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:17.602 17:16:37 -- common/autotest_common.sh@167 -- # : 0 00:07:17.602 17:16:37 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:17.602 17:16:37 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:17.602 17:16:37 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:17.602 17:16:37 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:17.602 17:16:37 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:17.602 17:16:37 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.602 17:16:37 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.602 17:16:37 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.602 17:16:37 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.602 17:16:37 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:17.602 17:16:37 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:17.602 17:16:37 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:17.602 17:16:37 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:17.602 17:16:37 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:17.602 17:16:37 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:17.602 17:16:37 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:17.602 17:16:37 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:17.602 17:16:37 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:17.602 17:16:37 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:17.602 17:16:37 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:17.602 17:16:37 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:17.863 17:16:37 -- common/autotest_common.sh@196 -- # cat 00:07:17.863 17:16:37 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:17.863 17:16:37 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:17.863 17:16:37 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:17.863 17:16:37 -- 
common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:17.863 17:16:37 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:17.863 17:16:37 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:17.863 17:16:37 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:17.863 17:16:37 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:17.863 17:16:37 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:17.863 17:16:37 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:17.863 17:16:37 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:17.863 17:16:37 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:17.863 17:16:37 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:17.863 17:16:37 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:17.863 17:16:37 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:17.863 17:16:37 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:17.863 17:16:37 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:17.863 17:16:37 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:17.863 17:16:37 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:17.863 17:16:37 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:17.863 17:16:37 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:17.863 17:16:37 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:17.863 17:16:37 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:17.863 17:16:37 -- common/autotest_common.sh@250 -- # [[ 1 -eq 1 ]] 00:07:17.863 17:16:37 -- common/autotest_common.sh@250 -- # _LCOV=1 00:07:17.863 17:16:37 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:17.863 17:16:37 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:17.863 17:16:37 -- common/autotest_common.sh@255 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:17.863 17:16:37 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:17.863 17:16:37 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:17.863 17:16:37 -- common/autotest_common.sh@259 -- # valgrind= 00:07:17.863 17:16:37 -- common/autotest_common.sh@265 -- # uname -s 00:07:17.863 17:16:37 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:17.863 17:16:37 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:17.863 17:16:37 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:17.863 17:16:37 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:17.863 17:16:37 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:17.863 17:16:37 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:17.863 17:16:37 
-- common/autotest_common.sh@275 -- # MAKE=make 00:07:17.863 17:16:37 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:07:17.863 17:16:37 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:17.863 17:16:37 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:17.863 17:16:37 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:17.863 17:16:37 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:17.863 17:16:37 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:17.863 17:16:37 -- common/autotest_common.sh@319 -- # [[ -z 365509 ]] 00:07:17.863 17:16:37 -- common/autotest_common.sh@319 -- # kill -0 365509 00:07:17.863 17:16:37 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:17.863 17:16:37 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:17.863 17:16:37 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:17.863 17:16:37 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:17.863 17:16:37 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:17.863 17:16:37 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:17.863 17:16:37 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:17.863 17:16:37 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:17.863 17:16:37 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.VIbW9z 00:07:17.863 17:16:37 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:17.863 17:16:37 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:17.863 17:16:37 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:17.863 17:16:37 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.VIbW9z/tests/nvmf /tmp/spdk.VIbW9z 00:07:17.863 17:16:37 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:17.863 17:16:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:17.863 17:16:37 -- common/autotest_common.sh@328 -- # df -T 00:07:17.863 17:16:37 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:17.863 17:16:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:07:17.863 17:16:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:17.863 17:16:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:07:17.863 17:16:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:07:17.863 17:16:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:17.863 17:16:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:17.863 17:16:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:07:17.863 17:16:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:07:17.863 17:16:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:07:17.863 17:16:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:07:17.863 17:16:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:07:17.863 17:16:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:17.863 17:16:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:07:17.863 17:16:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 
00:07:17.863 17:16:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=53665386496 00:07:17.863 17:16:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730578432 00:07:17.863 17:16:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=8065191936 00:07:17.863 17:16:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:17.863 17:16:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:17.863 17:16:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:17.863 17:16:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=30862696448 00:07:17.863 17:16:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865289216 00:07:17.863 17:16:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=2592768 00:07:17.864 17:16:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:17.864 17:16:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:17.864 17:16:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:17.864 17:16:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=12340125696 00:07:17.864 17:16:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346118144 00:07:17.864 17:16:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=5992448 00:07:17.864 17:16:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:17.864 17:16:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:17.864 17:16:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:17.864 17:16:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864220160 00:07:17.864 17:16:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865289216 00:07:17.864 17:16:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=1069056 00:07:17.864 17:16:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:17.864 17:16:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:17.864 17:16:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:17.864 17:16:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:07:17.864 17:16:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6173057024 00:07:17.864 17:16:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:17.864 17:16:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:17.864 17:16:37 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:07:17.864 * Looking for test storage... 
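The mount walk that ends at the "* Looking for test storage..." banner is set_test_storage parsing `df -T` output into associative arrays keyed by mount point (the `read -r source fs size use avail _ mount` loop traced at autotest_common.sh@361-364), after which each candidate directory's avails entry is compared against requested_size=2214592512. A sketch of that parse loop as reconstructed from the trace; the exact df invocation and any block-size flags are an assumption:

  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source    # backing device or pseudo-fs (spdk_root, /dev/pmem0, ...)
      fss["$mount"]=$fs           # filesystem type (overlay, tmpfs, ext2, ...)
      sizes["$mount"]=$size
      uses["$mount"]=$use
      avails["$mount"]=$avail     # later compared against requested_size
  done < <(df -T | grep -v Filesystem)

In this run the overlay-backed root mount wins the scan that follows, with target_space=53665386496 bytes free against the 2214592512-byte ask.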
00:07:17.864 17:16:37 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:17.864 17:16:37 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:17.864 17:16:37 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.864 17:16:37 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:17.864 17:16:37 -- common/autotest_common.sh@373 -- # mount=/ 00:07:17.864 17:16:37 -- common/autotest_common.sh@375 -- # target_space=53665386496 00:07:17.864 17:16:37 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:17.864 17:16:37 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:17.864 17:16:37 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:07:17.864 17:16:37 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:07:17.864 17:16:37 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:07:17.864 17:16:37 -- common/autotest_common.sh@382 -- # new_size=10279784448 00:07:17.864 17:16:37 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:17.864 17:16:37 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.864 17:16:37 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.864 17:16:37 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.864 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.864 17:16:37 -- common/autotest_common.sh@390 -- # return 0 00:07:17.864 17:16:37 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:17.864 17:16:37 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:17.864 17:16:37 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:17.864 17:16:37 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:17.864 17:16:37 -- common/autotest_common.sh@1682 -- # true 00:07:17.864 17:16:37 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:17.864 17:16:37 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:17.864 17:16:37 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:17.864 17:16:37 -- common/autotest_common.sh@27 -- # exec 00:07:17.864 17:16:37 -- common/autotest_common.sh@29 -- # exec 00:07:17.864 17:16:37 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:17.864 17:16:37 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:17.864 17:16:37 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:17.864 17:16:37 -- common/autotest_common.sh@18 -- # set -x 00:07:17.864 17:16:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:17.864 17:16:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:17.864 17:16:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:17.864 17:16:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:17.864 17:16:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:17.864 17:16:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:17.864 17:16:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:17.864 17:16:37 -- scripts/common.sh@335 -- # IFS=.-: 00:07:17.864 17:16:37 -- scripts/common.sh@335 -- # read -ra ver1 00:07:17.864 17:16:37 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.864 17:16:37 -- scripts/common.sh@336 -- # read -ra ver2 00:07:17.864 17:16:37 -- scripts/common.sh@337 -- # local 'op=<' 00:07:17.864 17:16:37 -- scripts/common.sh@339 -- # ver1_l=2 00:07:17.864 17:16:37 -- scripts/common.sh@340 -- # ver2_l=1 00:07:17.864 17:16:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:17.864 17:16:37 -- scripts/common.sh@343 -- # case "$op" in 00:07:17.864 17:16:37 -- scripts/common.sh@344 -- # : 1 00:07:17.864 17:16:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:17.864 17:16:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.864 17:16:37 -- scripts/common.sh@364 -- # decimal 1 00:07:17.864 17:16:37 -- scripts/common.sh@352 -- # local d=1 00:07:17.864 17:16:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.864 17:16:37 -- scripts/common.sh@354 -- # echo 1 00:07:17.864 17:16:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:17.864 17:16:37 -- scripts/common.sh@365 -- # decimal 2 00:07:17.864 17:16:37 -- scripts/common.sh@352 -- # local d=2 00:07:17.864 17:16:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.864 17:16:37 -- scripts/common.sh@354 -- # echo 2 00:07:17.864 17:16:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:17.864 17:16:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:17.864 17:16:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:17.864 17:16:37 -- scripts/common.sh@367 -- # return 0 00:07:17.864 17:16:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.864 17:16:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:17.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.864 --rc genhtml_branch_coverage=1 00:07:17.864 --rc genhtml_function_coverage=1 00:07:17.864 --rc genhtml_legend=1 00:07:17.864 --rc geninfo_all_blocks=1 00:07:17.864 --rc geninfo_unexecuted_blocks=1 00:07:17.864 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.864 ' 00:07:17.864 17:16:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:17.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.864 --rc genhtml_branch_coverage=1 00:07:17.864 --rc genhtml_function_coverage=1 00:07:17.864 --rc genhtml_legend=1 00:07:17.864 --rc geninfo_all_blocks=1 00:07:17.864 --rc geninfo_unexecuted_blocks=1 00:07:17.864 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.864 ' 00:07:17.864 17:16:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:17.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:17.864 --rc genhtml_branch_coverage=1 00:07:17.864 --rc genhtml_function_coverage=1 00:07:17.864 --rc genhtml_legend=1 00:07:17.864 --rc geninfo_all_blocks=1 00:07:17.864 --rc geninfo_unexecuted_blocks=1 00:07:17.864 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.864 ' 00:07:17.864 17:16:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:17.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.864 --rc genhtml_branch_coverage=1 00:07:17.864 --rc genhtml_function_coverage=1 00:07:17.864 --rc genhtml_legend=1 00:07:17.864 --rc geninfo_all_blocks=1 00:07:17.864 --rc geninfo_unexecuted_blocks=1 00:07:17.864 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:17.864 ' 00:07:17.864 17:16:37 -- nvmf/run.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:17.864 17:16:37 -- ../common.sh@8 -- # pids=() 00:07:17.864 17:16:37 -- nvmf/run.sh@55 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:17.864 17:16:37 -- nvmf/run.sh@56 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:17.864 17:16:37 -- nvmf/run.sh@56 -- # fuzz_num=25 00:07:17.864 17:16:37 -- nvmf/run.sh@57 -- # (( fuzz_num != 0 )) 00:07:17.864 17:16:37 -- nvmf/run.sh@59 -- # trap 'cleanup /tmp/llvm_fuzz*; exit 1' SIGINT SIGTERM EXIT 00:07:17.864 17:16:37 -- nvmf/run.sh@61 -- # mem_size=512 00:07:17.864 17:16:37 -- nvmf/run.sh@62 -- # [[ 1 -eq 1 ]] 00:07:17.864 17:16:37 -- nvmf/run.sh@63 -- # start_llvm_fuzz_short 25 1 00:07:17.864 17:16:37 -- ../common.sh@69 -- # local fuzz_num=25 00:07:17.864 17:16:37 -- ../common.sh@70 -- # local time=1 00:07:17.864 17:16:37 -- ../common.sh@72 -- # (( i = 0 )) 00:07:17.864 17:16:37 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:17.864 17:16:37 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:17.864 17:16:37 -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:17.864 17:16:37 -- nvmf/run.sh@24 -- # local timen=1 00:07:17.864 17:16:37 -- nvmf/run.sh@25 -- # local core=0x1 00:07:17.864 17:16:37 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:17.864 17:16:37 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:17.864 17:16:37 -- nvmf/run.sh@29 -- # printf %02d 0 00:07:17.864 17:16:37 -- nvmf/run.sh@29 -- # port=4400 00:07:17.864 17:16:37 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:17.865 17:16:37 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:17.865 17:16:37 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:17.865 17:16:37 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 -r /var/tmp/spdk0.sock 00:07:17.865 [2024-11-09 17:16:37.589897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:07:17.865 [2024-11-09 17:16:37.589969] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365617 ] 00:07:17.865 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.124 [2024-11-09 17:16:37.772543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.124 [2024-11-09 17:16:37.835815] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.124 [2024-11-09 17:16:37.835958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.383 [2024-11-09 17:16:37.894328] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.383 [2024-11-09 17:16:37.910710] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:18.383 INFO: Running with entropic power schedule (0xFF, 100). 00:07:18.383 INFO: Seed: 261977419 00:07:18.383 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:18.383 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:18.383 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:18.383 INFO: A corpus is not provided, starting from an empty corpus 00:07:18.383 #2 INITED exec/s: 0 rss: 61Mb 00:07:18.383 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:18.383 This may also happen if the target rejected all inputs we tried so far 00:07:18.383 [2024-11-09 17:16:37.955860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.383 [2024-11-09 17:16:37.955889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.643 NEW_FUNC[1/670]: 0x43a858 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:18.643 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:18.643 #9 NEW cov: 11546 ft: 11542 corp: 2/91b lim: 320 exec/s: 0 rss: 68Mb L: 90/90 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:18.643 [2024-11-09 17:16:38.267572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.643 [2024-11-09 17:16:38.267623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.643 #10 NEW cov: 11659 ft: 12303 corp: 3/194b lim: 320 exec/s: 0 rss: 68Mb L: 103/103 MS: 1 CopyPart- 00:07:18.643 [2024-11-09 17:16:38.317528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.643 [2024-11-09 17:16:38.317559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.643 #11 NEW cov: 11665 ft: 12526 corp: 4/297b lim: 320 exec/s: 0 rss: 68Mb L: 103/103 MS: 1 ChangeByte- 00:07:18.643 [2024-11-09 
17:16:38.357653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:18.643 [2024-11-09 17:16:38.357684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.643 #13 NEW cov: 11767 ft: 12796 corp: 5/402b lim: 320 exec/s: 0 rss: 68Mb L: 105/105 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:07:18.643 [2024-11-09 17:16:38.397700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.643 [2024-11-09 17:16:38.397728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.902 #14 NEW cov: 11767 ft: 12862 corp: 6/528b lim: 320 exec/s: 0 rss: 68Mb L: 126/126 MS: 1 CopyPart- 00:07:18.902 [2024-11-09 17:16:38.437848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.903 [2024-11-09 17:16:38.437875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 [2024-11-09 17:16:38.438007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.903 [2024-11-09 17:16:38.438024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.903 NEW_FUNC[1/1]: 0x12c5228 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2016 00:07:18.903 #15 NEW cov: 11798 ft: 13121 corp: 7/666b lim: 320 exec/s: 0 rss: 68Mb L: 138/138 MS: 1 CrossOver- 00:07:18.903 [2024-11-09 17:16:38.488000] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:18.903 [2024-11-09 17:16:38.488029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 #16 NEW cov: 11798 ft: 13261 corp: 8/740b lim: 320 exec/s: 0 rss: 68Mb L: 74/138 MS: 1 EraseBytes- 00:07:18.903 [2024-11-09 17:16:38.528094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:18.903 [2024-11-09 17:16:38.528122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 #17 NEW cov: 11798 ft: 13353 corp: 9/814b lim: 320 exec/s: 0 rss: 68Mb L: 74/138 MS: 1 ShuffleBytes- 00:07:18.903 [2024-11-09 17:16:38.568211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:18.903 [2024-11-09 17:16:38.568242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 #23 NEW cov: 11798 ft: 13394 corp: 10/920b lim: 320 exec/s: 0 rss: 68Mb L: 106/138 MS: 1 InsertByte- 00:07:18.903 [2024-11-09 17:16:38.608268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.903 [2024-11-09 17:16:38.608297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 #28 NEW cov: 11798 ft: 13419 corp: 11/1045b lim: 320 exec/s: 0 rss: 69Mb L: 125/138 MS: 5 EraseBytes-ShuffleBytes-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:07:18.903 [2024-11-09 17:16:38.648423] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:18.903 [2024-11-09 17:16:38.648453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 #34 NEW cov: 11798 ft: 13445 corp: 12/1119b lim: 320 exec/s: 0 rss: 69Mb L: 74/138 MS: 1 ShuffleBytes- 00:07:19.162 [2024-11-09 17:16:38.688453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.162 [2024-11-09 17:16:38.688481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 #35 NEW cov: 11798 ft: 13473 corp: 13/1222b lim: 320 exec/s: 0 rss: 69Mb L: 103/138 MS: 1 ChangeBit- 00:07:19.162 [2024-11-09 17:16:38.728563] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.162 [2024-11-09 17:16:38.728591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 [2024-11-09 17:16:38.728732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ec) qid:0 cid:5 nsid:ececec cdw10:00000000 cdw11:00000000 00:07:19.162 [2024-11-09 17:16:38.728749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.162 #36 NEW cov: 11799 ft: 13526 corp: 14/1368b lim: 320 exec/s: 0 rss: 69Mb L: 146/146 MS: 1 InsertRepeatedBytes- 00:07:19.162 [2024-11-09 17:16:38.769204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.162 [2024-11-09 17:16:38.769233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 [2024-11-09 17:16:38.769369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.162 [2024-11-09 17:16:38.769387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.162 [2024-11-09 17:16:38.769516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.162 [2024-11-09 17:16:38.769537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.162 #37 NEW cov: 11799 ft: 13733 corp: 15/1561b lim: 320 exec/s: 0 rss: 69Mb L: 193/193 MS: 1 InsertRepeatedBytes- 
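This stream of "#N NEW cov:" lines is standard libFuzzer status output: cov counts covered coverage points, ft distinct features, corp the corpus size in units/total bytes, lim the current input-length cap, L the size of the new input, and MS the mutation sequence that produced it. The instance itself is the llvm_nvme_fuzz invocation recorded at nvmf/run.sh@36 earlier in this log, so the same run can be reproduced by hand with the same flags (workspace paths shortened to a relative checkout; the -P output prefix dropped):

  # flags copied from the nvmf/run.sh@36 line above; adjust paths to your tree
  ./test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' \
      -c /tmp/fuzz_json_0.conf -t 1 -D ../corpus/llvm_nvmf_0 -Z 0 -r /var/tmp/spdk0.sock

Note that the -F trid names port 4400 because run.sh@29 computed port=4400 for fuzzer 0, and run.sh@33 rewrote trsvcid 4420 to 4400 in fuzz_json.conf, presumably producing the /tmp/fuzz_json_0.conf passed via -c.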
00:07:19.162 [2024-11-09 17:16:38.808978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.162 [2024-11-09 17:16:38.809005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 #38 NEW cov: 11799 ft: 13740 corp: 16/1664b lim: 320 exec/s: 0 rss: 69Mb L: 103/193 MS: 1 CopyPart- 00:07:19.162 [2024-11-09 17:16:38.849079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.162 [2024-11-09 17:16:38.849108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:19.162 #39 NEW cov: 11822 ft: 13779 corp: 17/1767b lim: 320 exec/s: 0 rss: 69Mb L: 103/193 MS: 1 ShuffleBytes- 00:07:19.163 [2024-11-09 17:16:38.889233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.163 [2024-11-09 17:16:38.889261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.163 #40 NEW cov: 11822 ft: 13787 corp: 18/1857b lim: 320 exec/s: 0 rss: 69Mb L: 90/193 MS: 1 ChangeByte- 00:07:19.163 [2024-11-09 17:16:38.929441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.163 [2024-11-09 17:16:38.929474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.422 #41 NEW cov: 11822 ft: 13881 corp: 19/1967b lim: 320 exec/s: 41 rss: 69Mb L: 110/193 MS: 1 CopyPart- 00:07:19.422 [2024-11-09 17:16:38.979252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.422 [2024-11-09 17:16:38.979281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.422 [2024-11-09 17:16:38.979417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ff050000 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.422 [2024-11-09 17:16:38.979434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.422 #42 NEW cov: 11822 ft: 13894 corp: 20/2096b lim: 320 exec/s: 42 rss: 69Mb L: 129/193 MS: 1 CMP- DE: "\000\000\000\005"- 00:07:19.422 [2024-11-09 17:16:39.039674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.422 [2024-11-09 17:16:39.039704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.422 #43 NEW cov: 11822 ft: 13987 corp: 21/2186b lim: 320 exec/s: 43 rss: 69Mb L: 90/193 MS: 1 ChangeBit- 00:07:19.422 
[2024-11-09 17:16:39.089830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.422 [2024-11-09 17:16:39.089859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.422 #45 NEW cov: 11822 ft: 14016 corp: 22/2301b lim: 320 exec/s: 45 rss: 69Mb L: 115/193 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:19.422 [2024-11-09 17:16:39.149945] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.422 [2024-11-09 17:16:39.149974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.422 #51 NEW cov: 11822 ft: 14070 corp: 23/2408b lim: 320 exec/s: 51 rss: 69Mb L: 107/193 MS: 1 InsertByte- 00:07:19.681 [2024-11-09 17:16:39.200170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.681 [2024-11-09 17:16:39.200201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.681 #52 NEW cov: 11822 ft: 14100 corp: 24/2513b lim: 320 exec/s: 52 rss: 69Mb L: 105/193 MS: 1 PersAutoDict- DE: "\000\000\000\005"- 00:07:19.681 [2024-11-09 17:16:39.250344] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.681 [2024-11-09 17:16:39.250374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.681 #53 NEW cov: 11822 ft: 14137 corp: 25/2587b lim: 320 exec/s: 53 rss: 69Mb L: 74/193 MS: 1 ShuffleBytes- 00:07:19.681 [2024-11-09 17:16:39.300465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.681 [2024-11-09 17:16:39.300495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.681 #54 NEW cov: 11822 ft: 14167 corp: 26/2695b lim: 320 exec/s: 54 rss: 69Mb L: 108/193 MS: 1 InsertRepeatedBytes- 00:07:19.681 [2024-11-09 17:16:39.350738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.681 [2024-11-09 17:16:39.350771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.682 #55 NEW cov: 11822 ft: 14170 corp: 27/2769b lim: 320 exec/s: 55 rss: 69Mb L: 74/193 MS: 1 ChangeBit- 00:07:19.682 [2024-11-09 17:16:39.411116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.682 [2024-11-09 17:16:39.411143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.682 [2024-11-09 17:16:39.411276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.682 
[2024-11-09 17:16:39.411293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.682 #56 NEW cov: 11822 ft: 14196 corp: 28/2907b lim: 320 exec/s: 56 rss: 69Mb L: 138/193 MS: 1 CopyPart- 00:07:19.941 [2024-11-09 17:16:39.471037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.941 [2024-11-09 17:16:39.471068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.941 #57 NEW cov: 11822 ft: 14202 corp: 29/3023b lim: 320 exec/s: 57 rss: 70Mb L: 116/193 MS: 1 CMP- DE: "\020\000\000\000\000\000\000\000"- 00:07:19.941 [2024-11-09 17:16:39.531180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:19.941 [2024-11-09 17:16:39.531210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.942 #58 NEW cov: 11822 ft: 14220 corp: 30/3097b lim: 320 exec/s: 58 rss: 70Mb L: 74/193 MS: 1 ChangeBinInt- 00:07:19.942 [2024-11-09 17:16:39.570872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.942 [2024-11-09 17:16:39.570899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.942 #59 NEW cov: 11822 ft: 14252 corp: 31/3196b lim: 320 exec/s: 59 rss: 70Mb L: 99/193 MS: 1 EraseBytes- 00:07:19.942 [2024-11-09 17:16:39.621654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffff60ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.942 [2024-11-09 17:16:39.621682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.942 [2024-11-09 17:16:39.621815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ff050000 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.942 [2024-11-09 17:16:39.621831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.942 #60 NEW cov: 11822 ft: 14262 corp: 32/3325b lim: 320 exec/s: 60 rss: 70Mb L: 129/193 MS: 1 ChangeByte- 00:07:19.942 [2024-11-09 17:16:39.661458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.942 [2024-11-09 17:16:39.661486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.942 #61 NEW cov: 11822 ft: 14270 corp: 33/3451b lim: 320 exec/s: 61 rss: 70Mb L: 126/193 MS: 1 ShuffleBytes- 00:07:19.942 [2024-11-09 17:16:39.701862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffff60ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.942 [2024-11-09 17:16:39.701893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.942 [2024-11-09 17:16:39.702033] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ff050000 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.942 [2024-11-09 17:16:39.702049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.201 #62 NEW cov: 11822 ft: 14284 corp: 34/3580b lim: 320 exec/s: 62 rss: 70Mb L: 129/193 MS: 1 ChangeBit- 00:07:20.201 [2024-11-09 17:16:39.741638] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:20.201 [2024-11-09 17:16:39.741667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.201 #63 NEW cov: 11822 ft: 14291 corp: 35/3654b lim: 320 exec/s: 63 rss: 70Mb L: 74/193 MS: 1 ChangeBinInt- 00:07:20.201 [2024-11-09 17:16:39.792212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xecececececececec 00:07:20.201 [2024-11-09 17:16:39.792239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.201 [2024-11-09 17:16:39.792359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ec) qid:0 cid:5 nsid:ececec cdw10:00000000 cdw11:00000000 00:07:20.202 [2024-11-09 17:16:39.792375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.202 #64 NEW cov: 11822 ft: 14297 corp: 36/3800b lim: 320 exec/s: 64 rss: 70Mb L: 146/193 MS: 1 ChangeBit- 00:07:20.202 [2024-11-09 17:16:39.841612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:20.202 [2024-11-09 17:16:39.841640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.202 #69 NEW cov: 11822 ft: 14326 corp: 37/3868b lim: 320 exec/s: 69 rss: 70Mb L: 68/193 MS: 5 EraseBytes-ShuffleBytes-ChangeBinInt-InsertByte-CopyPart- 00:07:20.202 [2024-11-09 17:16:39.882022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5e) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:20.202 [2024-11-09 17:16:39.882050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.202 [2024-11-09 17:16:39.882183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:20.202 [2024-11-09 17:16:39.882200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.202 #70 NEW cov: 11822 ft: 14356 corp: 38/4053b lim: 320 exec/s: 70 rss: 70Mb L: 185/193 MS: 1 CopyPart- 00:07:20.202 [2024-11-09 17:16:39.922272] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ecececec SGL TRANSPORT DATA BLOCK TRANSPORT 0xacacacececececec 00:07:20.202 [2024-11-09 17:16:39.922300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.202 #71 NEW cov: 11822 ft: 14363 corp: 39/4138b lim: 320 exec/s: 35 rss: 70Mb L: 85/193 MS: 1 InsertRepeatedBytes- 00:07:20.202 #71 DONE cov: 11822 ft: 14363 corp: 39/4138b lim: 320 exec/s: 35 rss: 70Mb 00:07:20.202 ###### Recommended dictionary. ###### 00:07:20.202 "\000\000\000\005" # Uses: 1 00:07:20.202 "\020\000\000\000\000\000\000\000" # Uses: 0 00:07:20.202 ###### End of recommended dictionary. ###### 00:07:20.202 Done 71 runs in 2 second(s) 00:07:20.461 17:16:40 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_0.conf 00:07:20.461 17:16:40 -- ../common.sh@72 -- # (( i++ )) 00:07:20.461 17:16:40 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:20.461 17:16:40 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:20.461 17:16:40 -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:20.461 17:16:40 -- nvmf/run.sh@24 -- # local timen=1 00:07:20.461 17:16:40 -- nvmf/run.sh@25 -- # local core=0x1 00:07:20.461 17:16:40 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:20.461 17:16:40 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:20.461 17:16:40 -- nvmf/run.sh@29 -- # printf %02d 1 00:07:20.461 17:16:40 -- nvmf/run.sh@29 -- # port=4401 00:07:20.461 17:16:40 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:20.461 17:16:40 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:20.461 17:16:40 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:20.461 17:16:40 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 -r /var/tmp/spdk1.sock 00:07:20.461 [2024-11-09 17:16:40.118556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.461 [2024-11-09 17:16:40.118647] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366112 ] 00:07:20.461 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.721 [2024-11-09 17:16:40.306506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.721 [2024-11-09 17:16:40.371432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:20.721 [2024-11-09 17:16:40.371582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.721 [2024-11-09 17:16:40.429868] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.721 [2024-11-09 17:16:40.446223] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:20.721 INFO: Running with entropic power schedule (0xFF, 100). 
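[editor's note] The shell trace above shows the per-target launch pattern used by nvmf/run.sh: fuzzer index N gets TCP port 44NN, a private JSON config produced by rewriting the listener port in the shared template with sed, and its own corpus directory. A minimal sketch of that pattern follows, reconstructed from the logged commands only; the ROOT variable and the helper-function name are hypothetical stand-ins for the workspace paths, while the flags are copied from the invocation logged above.

    # Sketch (assumptions: ROOT points at the spdk checkout; helper name is invented).
    start_llvm_fuzz_sketch() {
      local idx=$1 timen=$2 core=$3
      local port corpus cfg
      port=$(printf '44%02d' "$idx")               # index 1 -> 4401, 2 -> 4402, ...
      corpus="$ROOT/../corpus/llvm_nvmf_${idx}"    # per-target corpus directory
      cfg="/tmp/fuzz_json_${idx}.conf"
      mkdir -p "$corpus"
      # Rewrite the listener port in the shared JSON template for this target.
      sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
          "$ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$cfg"
      "$ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
          -P "$ROOT/../output/llvm/" \
          -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}" \
          -c "$cfg" -t "$timen" -D "$corpus" -Z "$idx" -r "/var/tmp/spdk${idx}.sock"
    }

With that sketch, the run that starts below corresponds to start_llvm_fuzz_sketch 1 1 0x1: one second of fuzzing on core 0x1 against the NVMe/TCP listener on port 4401. [end editor's note]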
00:07:20.721 INFO: Seed: 2795983329 00:07:20.980 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:20.980 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:20.980 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:20.980 INFO: A corpus is not provided, starting from an empty corpus 00:07:20.980 #2 INITED exec/s: 0 rss: 61Mb 00:07:20.980 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:20.980 This may also happen if the target rejected all inputs we tried so far 00:07:20.980 [2024-11-09 17:16:40.511197] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:20.980 [2024-11-09 17:16:40.511314] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:20.980 [2024-11-09 17:16:40.511511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.980 [2024-11-09 17:16:40.511544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.980 [2024-11-09 17:16:40.511598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.980 [2024-11-09 17:16:40.511611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.239 NEW_FUNC[1/671]: 0x43b158 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:21.239 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:21.240 #8 NEW cov: 11622 ft: 11623 corp: 2/16b lim: 30 exec/s: 0 rss: 68Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:07:21.240 [2024-11-09 17:16:40.832184] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.240 [2024-11-09 17:16:40.832333] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.240 [2024-11-09 17:16:40.832587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-11-09 17:16:40.832645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.240 [2024-11-09 17:16:40.832727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-11-09 17:16:40.832753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.240 #14 NEW cov: 11735 ft: 12323 corp: 3/29b lim: 30 exec/s: 0 rss: 68Mb L: 13/15 MS: 1 EraseBytes- 00:07:21.240 [2024-11-09 17:16:40.882118] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.240 [2024-11-09 17:16:40.882235] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.240 [2024-11-09 17:16:40.882460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 
nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-11-09 17:16:40.882486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.240 [2024-11-09 17:16:40.882546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-11-09 17:16:40.882561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.240 #15 NEW cov: 11741 ft: 12657 corp: 4/45b lim: 30 exec/s: 0 rss: 68Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:07:21.240 [2024-11-09 17:16:40.922222] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.240 [2024-11-09 17:16:40.922439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-11-09 17:16:40.922465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.240 #21 NEW cov: 11826 ft: 13339 corp: 5/56b lim: 30 exec/s: 0 rss: 68Mb L: 11/16 MS: 1 EraseBytes- 00:07:21.240 [2024-11-09 17:16:40.962322] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.240 [2024-11-09 17:16:40.962554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-11-09 17:16:40.962580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.240 #22 NEW cov: 11826 ft: 13425 corp: 6/67b lim: 30 exec/s: 0 rss: 69Mb L: 11/16 MS: 1 ShuffleBytes- 00:07:21.240 [2024-11-09 17:16:41.002455] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:07:21.240 [2024-11-09 17:16:41.002579] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.240 [2024-11-09 17:16:41.002787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff003b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-11-09 17:16:41.002814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.240 [2024-11-09 17:16:41.002868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-11-09 17:16:41.002882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.499 #23 NEW cov: 11849 ft: 13519 corp: 7/80b lim: 30 exec/s: 0 rss: 69Mb L: 13/16 MS: 1 CMP- DE: ";\000\000\000"- 00:07:21.499 [2024-11-09 17:16:41.042550] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.499 [2024-11-09 17:16:41.042783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.499 [2024-11-09 17:16:41.042809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:21.499 #24 NEW cov: 11849 ft: 13565 corp: 8/91b lim: 30 exec/s: 0 rss: 69Mb L: 11/16 MS: 1 ChangeBinInt- 00:07:21.499 [2024-11-09 17:16:41.082709] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.499 [2024-11-09 17:16:41.082845] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f7ff 00:07:21.499 [2024-11-09 17:16:41.083053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.499 [2024-11-09 17:16:41.083079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.499 [2024-11-09 17:16:41.083134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-11-09 17:16:41.083149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.500 #25 NEW cov: 11849 ft: 13664 corp: 9/107b lim: 30 exec/s: 0 rss: 69Mb L: 16/16 MS: 1 ChangeBit- 00:07:21.500 [2024-11-09 17:16:41.122821] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.500 [2024-11-09 17:16:41.122953] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:21.500 [2024-11-09 17:16:41.123172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-11-09 17:16:41.123197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.500 [2024-11-09 17:16:41.123251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-11-09 17:16:41.123264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.500 #26 NEW cov: 11849 ft: 13846 corp: 10/121b lim: 30 exec/s: 0 rss: 69Mb L: 14/16 MS: 1 InsertByte- 00:07:21.500 [2024-11-09 17:16:41.163008] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.500 [2024-11-09 17:16:41.163143] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:21.500 [2024-11-09 17:16:41.163250] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.500 [2024-11-09 17:16:41.163458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-11-09 17:16:41.163485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.500 [2024-11-09 17:16:41.163544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-11-09 17:16:41.163559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.500 [2024-11-09 17:16:41.163615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:007e83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-11-09 17:16:41.163628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.500 #27 NEW cov: 11849 ft: 14191 corp: 11/139b lim: 30 exec/s: 0 rss: 69Mb L: 18/18 MS: 1 PersAutoDict- DE: ";\000\000\000"- 00:07:21.500 [2024-11-09 17:16:41.202997] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:21.500 [2024-11-09 17:16:41.203222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-11-09 17:16:41.203248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.500 #28 NEW cov: 11849 ft: 14208 corp: 12/147b lim: 30 exec/s: 0 rss: 69Mb L: 8/18 MS: 1 EraseBytes- 00:07:21.500 [2024-11-09 17:16:41.243159] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.500 [2024-11-09 17:16:41.243291] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (4100) > buf size (4096) 00:07:21.500 [2024-11-09 17:16:41.243497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-11-09 17:16:41.243528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.500 [2024-11-09 17:16:41.243585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-11-09 17:16:41.243599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.759 #29 NEW cov: 11849 ft: 14262 corp: 13/163b lim: 30 exec/s: 0 rss: 69Mb L: 16/18 MS: 1 ChangeBinInt- 00:07:21.759 [2024-11-09 17:16:41.283276] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff7e 00:07:21.759 [2024-11-09 17:16:41.283507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-11-09 17:16:41.283538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.759 #30 NEW cov: 11849 ft: 14278 corp: 14/171b lim: 30 exec/s: 0 rss: 69Mb L: 8/18 MS: 1 ShuffleBytes- 00:07:21.759 [2024-11-09 17:16:41.323394] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff7e 00:07:21.759 [2024-11-09 17:16:41.323630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-11-09 17:16:41.323657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.759 #31 NEW cov: 11849 ft: 14294 corp: 15/179b lim: 30 exec/s: 0 rss: 69Mb L: 8/18 MS: 1 ChangeBit- 00:07:21.759 [2024-11-09 17:16:41.363511] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.759 [2024-11-09 17:16:41.363769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-11-09 17:16:41.363793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.759 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:21.759 #32 NEW cov: 11872 ft: 14329 corp: 16/190b lim: 30 exec/s: 0 rss: 69Mb L: 11/18 MS: 1 ChangeBit- 00:07:21.759 [2024-11-09 17:16:41.413683] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.759 [2024-11-09 17:16:41.413803] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f7ff 00:07:21.759 [2024-11-09 17:16:41.414010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-11-09 17:16:41.414036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.759 [2024-11-09 17:16:41.414091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-11-09 17:16:41.414108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.759 #33 NEW cov: 11872 ft: 14398 corp: 17/206b lim: 30 exec/s: 0 rss: 69Mb L: 16/18 MS: 1 ChangeBit- 00:07:21.760 [2024-11-09 17:16:41.453783] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.760 [2024-11-09 17:16:41.453915] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.760 [2024-11-09 17:16:41.454139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83fb cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.760 [2024-11-09 17:16:41.454165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.760 [2024-11-09 17:16:41.454221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.760 [2024-11-09 17:16:41.454235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.760 #34 NEW cov: 11872 ft: 14408 corp: 18/222b lim: 30 exec/s: 0 rss: 69Mb L: 16/18 MS: 1 ChangeBit- 00:07:21.760 [2024-11-09 17:16:41.493883] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff71 00:07:21.760 [2024-11-09 17:16:41.494097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.760 [2024-11-09 17:16:41.494127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.760 #38 NEW cov: 11872 ft: 14437 corp: 19/228b lim: 30 exec/s: 38 rss: 69Mb L: 6/18 MS: 4 CrossOver-CrossOver-CopyPart-InsertByte- 00:07:22.019 [2024-11-09 17:16:41.534046] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:22.020 [2024-11-09 17:16:41.534163] 
ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.020 [2024-11-09 17:16:41.534369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.534395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.020 [2024-11-09 17:16:41.534450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.534464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.020 #39 NEW cov: 11872 ft: 14527 corp: 20/241b lim: 30 exec/s: 39 rss: 69Mb L: 13/18 MS: 1 InsertRepeatedBytes- 00:07:22.020 [2024-11-09 17:16:41.574125] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000032ff 00:07:22.020 [2024-11-09 17:16:41.574338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff817e cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.574370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.020 #41 NEW cov: 11872 ft: 14536 corp: 21/247b lim: 30 exec/s: 41 rss: 69Mb L: 6/18 MS: 2 EraseBytes-InsertByte- 00:07:22.020 [2024-11-09 17:16:41.614321] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.020 [2024-11-09 17:16:41.614433] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:22.020 [2024-11-09 17:16:41.614563] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.020 [2024-11-09 17:16:41.614770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.614800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.020 [2024-11-09 17:16:41.614857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.614871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.020 [2024-11-09 17:16:41.614902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:41ff83f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.614915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.020 #42 NEW cov: 11872 ft: 14565 corp: 22/267b lim: 30 exec/s: 42 rss: 69Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:22.020 [2024-11-09 17:16:41.654467] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.020 [2024-11-09 17:16:41.654603] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:22.020 [2024-11-09 17:16:41.654712] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 
00:07:22.020 [2024-11-09 17:16:41.654938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.654964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.020 [2024-11-09 17:16:41.655016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.655030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.020 [2024-11-09 17:16:41.655085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.655098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.020 #43 NEW cov: 11872 ft: 14573 corp: 23/287b lim: 30 exec/s: 43 rss: 70Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:22.020 [2024-11-09 17:16:41.694596] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.020 [2024-11-09 17:16:41.694728] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261684) > buf size (4096) 00:07:22.020 [2024-11-09 17:16:41.694840] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:22.020 [2024-11-09 17:16:41.694951] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:07:22.020 [2024-11-09 17:16:41.695173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.695199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.020 [2024-11-09 17:16:41.695255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.695269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.020 [2024-11-09 17:16:41.695322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.695336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.020 [2024-11-09 17:16:41.695390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.695406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.020 #44 NEW cov: 11872 ft: 15071 corp: 24/312b lim: 30 exec/s: 44 rss: 70Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:22.020 [2024-11-09 17:16:41.744674] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.020 [2024-11-09 17:16:41.744787] ctrlr.c:2504:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x30000ffff 00:07:22.020 [2024-11-09 17:16:41.744997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.745023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.020 [2024-11-09 17:16:41.745081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.745095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.020 #45 NEW cov: 11872 ft: 15086 corp: 25/325b lim: 30 exec/s: 45 rss: 70Mb L: 13/25 MS: 1 ChangeBit- 00:07:22.020 [2024-11-09 17:16:41.774732] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.020 [2024-11-09 17:16:41.774972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-11-09 17:16:41.774998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.279 #46 NEW cov: 11872 ft: 15130 corp: 26/336b lim: 30 exec/s: 46 rss: 70Mb L: 11/25 MS: 1 ChangeBinInt- 00:07:22.279 [2024-11-09 17:16:41.814869] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.279 [2024-11-09 17:16:41.815001] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:22.279 [2024-11-09 17:16:41.815206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-11-09 17:16:41.815232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.279 [2024-11-09 17:16:41.815285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff0204 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-11-09 17:16:41.815300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.279 #47 NEW cov: 11872 ft: 15148 corp: 27/348b lim: 30 exec/s: 47 rss: 70Mb L: 12/25 MS: 1 CrossOver- 00:07:22.279 [2024-11-09 17:16:41.854966] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.279 [2024-11-09 17:16:41.855205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-11-09 17:16:41.855231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.279 #48 NEW cov: 11872 ft: 15162 corp: 28/359b lim: 30 exec/s: 48 rss: 70Mb L: 11/25 MS: 1 CopyPart- 00:07:22.279 [2024-11-09 17:16:41.895100] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000032ff 00:07:22.279 [2024-11-09 17:16:41.895307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff817e cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:22.279 [2024-11-09 17:16:41.895332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.279 #49 NEW cov: 11872 ft: 15184 corp: 29/365b lim: 30 exec/s: 49 rss: 70Mb L: 6/25 MS: 1 ShuffleBytes- 00:07:22.279 [2024-11-09 17:16:41.935296] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.279 [2024-11-09 17:16:41.935417] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:22.279 [2024-11-09 17:16:41.935534] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.279 [2024-11-09 17:16:41.935774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-11-09 17:16:41.935800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.279 [2024-11-09 17:16:41.935856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-11-09 17:16:41.935871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.279 [2024-11-09 17:16:41.935925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:007e83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-11-09 17:16:41.935939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.279 #50 NEW cov: 11872 ft: 15192 corp: 30/383b lim: 30 exec/s: 50 rss: 70Mb L: 18/25 MS: 1 ShuffleBytes- 00:07:22.279 [2024-11-09 17:16:41.975339] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffef 00:07:22.279 [2024-11-09 17:16:41.975574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3b000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.280 [2024-11-09 17:16:41.975600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.280 #54 NEW cov: 11872 ft: 15206 corp: 31/390b lim: 30 exec/s: 54 rss: 70Mb L: 7/25 MS: 4 EraseBytes-ShuffleBytes-ChangeBit-PersAutoDict- DE: ";\000\000\000"- 00:07:22.280 [2024-11-09 17:16:42.015496] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000000ff 00:07:22.280 [2024-11-09 17:16:42.015621] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.280 [2024-11-09 17:16:42.015829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff833b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.280 [2024-11-09 17:16:42.015855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.280 [2024-11-09 17:16:42.015912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.280 [2024-11-09 17:16:42.015926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.280 
#55 NEW cov: 11872 ft: 15210 corp: 32/403b lim: 30 exec/s: 55 rss: 70Mb L: 13/25 MS: 1 ShuffleBytes- 00:07:22.539 [2024-11-09 17:16:42.055609] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.539 [2024-11-09 17:16:42.055742] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:22.539 [2024-11-09 17:16:42.055953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.055980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.539 [2024-11-09 17:16:42.056039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff0229 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.056053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.539 #56 NEW cov: 11872 ft: 15233 corp: 33/415b lim: 30 exec/s: 56 rss: 70Mb L: 12/25 MS: 1 ChangeByte- 00:07:22.539 [2024-11-09 17:16:42.095710] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:22.539 [2024-11-09 17:16:42.095835] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.539 [2024-11-09 17:16:42.096042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.096068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.539 [2024-11-09 17:16:42.096126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.096140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.539 #57 NEW cov: 11872 ft: 15256 corp: 34/428b lim: 30 exec/s: 57 rss: 70Mb L: 13/25 MS: 1 ShuffleBytes- 00:07:22.539 [2024-11-09 17:16:42.135814] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffef 00:07:22.539 [2024-11-09 17:16:42.136030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3b000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.136055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.539 #58 NEW cov: 11872 ft: 15264 corp: 35/435b lim: 30 exec/s: 58 rss: 70Mb L: 7/25 MS: 1 ShuffleBytes- 00:07:22.539 [2024-11-09 17:16:42.175911] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004532 00:07:22.539 [2024-11-09 17:16:42.176122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff817e cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.176164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.539 #59 NEW cov: 11872 ft: 15294 corp: 36/442b lim: 30 exec/s: 59 rss: 70Mb L: 7/25 MS: 1 InsertByte- 00:07:22.539 [2024-11-09 
17:16:42.216098] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.539 [2024-11-09 17:16:42.216213] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:07:22.539 [2024-11-09 17:16:42.216421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.216447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.539 [2024-11-09 17:16:42.216503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff8104 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.216516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.539 #60 NEW cov: 11872 ft: 15302 corp: 37/454b lim: 30 exec/s: 60 rss: 70Mb L: 12/25 MS: 1 ChangeByte- 00:07:22.539 [2024-11-09 17:16:42.256192] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.539 [2024-11-09 17:16:42.256308] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:07:22.539 [2024-11-09 17:16:42.256519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.256549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.539 [2024-11-09 17:16:42.256605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff3b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-11-09 17:16:42.256618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.540 #61 NEW cov: 11872 ft: 15346 corp: 38/469b lim: 30 exec/s: 61 rss: 70Mb L: 15/25 MS: 1 PersAutoDict- DE: ";\000\000\000"- 00:07:22.540 [2024-11-09 17:16:42.296326] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff6f 00:07:22.540 [2024-11-09 17:16:42.296547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3b000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.540 [2024-11-09 17:16:42.296573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.800 #62 NEW cov: 11872 ft: 15363 corp: 39/476b lim: 30 exec/s: 62 rss: 70Mb L: 7/25 MS: 1 ChangeBit- 00:07:22.800 [2024-11-09 17:16:42.336453] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff3b 00:07:22.800 [2024-11-09 17:16:42.336579] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:22.800 [2024-11-09 17:16:42.336794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.336819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.800 [2024-11-09 17:16:42.336877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.336890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.800 #63 NEW cov: 11872 ft: 15370 corp: 40/490b lim: 30 exec/s: 63 rss: 70Mb L: 14/25 MS: 1 PersAutoDict- DE: ";\000\000\000"- 00:07:22.800 [2024-11-09 17:16:42.376607] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.800 [2024-11-09 17:16:42.376740] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:22.800 [2024-11-09 17:16:42.376849] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.800 [2024-11-09 17:16:42.377062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.377088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.800 [2024-11-09 17:16:42.377145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.377159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.800 [2024-11-09 17:16:42.377214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:007e83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.377228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.800 #64 NEW cov: 11872 ft: 15373 corp: 41/508b lim: 30 exec/s: 64 rss: 70Mb L: 18/25 MS: 1 ChangeBit- 00:07:22.800 [2024-11-09 17:16:42.416668] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.800 [2024-11-09 17:16:42.416785] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:07:22.800 [2024-11-09 17:16:42.417011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.417037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.800 [2024-11-09 17:16:42.417095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff8104 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.417109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.800 #65 NEW cov: 11872 ft: 15380 corp: 42/521b lim: 30 exec/s: 65 rss: 70Mb L: 13/25 MS: 1 InsertByte- 00:07:22.800 [2024-11-09 17:16:42.456814] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000aef 00:07:22.800 [2024-11-09 17:16:42.456949] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:22.800 [2024-11-09 17:16:42.457060] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.800 [2024-11-09 17:16:42.457288] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.457315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.800 [2024-11-09 17:16:42.457370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.457384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.800 [2024-11-09 17:16:42.457436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:007e83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.457450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.800 #66 NEW cov: 11872 ft: 15386 corp: 43/539b lim: 30 exec/s: 66 rss: 70Mb L: 18/25 MS: 1 ShuffleBytes- 00:07:22.800 [2024-11-09 17:16:42.496918] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.800 [2024-11-09 17:16:42.497049] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.800 [2024-11-09 17:16:42.497257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.497282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.800 [2024-11-09 17:16:42.497340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.800 [2024-11-09 17:16:42.497355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.800 #67 NEW cov: 11872 ft: 15392 corp: 44/551b lim: 30 exec/s: 33 rss: 70Mb L: 12/25 MS: 1 CrossOver- 00:07:22.800 #67 DONE cov: 11872 ft: 15392 corp: 44/551b lim: 30 exec/s: 33 rss: 70Mb 00:07:22.800 ###### Recommended dictionary. ###### 00:07:22.800 ";\000\000\000" # Uses: 4 00:07:22.800 ###### End of recommended dictionary. 
###### 00:07:22.800 Done 67 runs in 2 second(s) 00:07:23.060 17:16:42 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_1.conf 00:07:23.060 17:16:42 -- ../common.sh@72 -- # (( i++ )) 00:07:23.060 17:16:42 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.060 17:16:42 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:23.060 17:16:42 -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:23.060 17:16:42 -- nvmf/run.sh@24 -- # local timen=1 00:07:23.060 17:16:42 -- nvmf/run.sh@25 -- # local core=0x1 00:07:23.060 17:16:42 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:23.060 17:16:42 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:23.060 17:16:42 -- nvmf/run.sh@29 -- # printf %02d 2 00:07:23.060 17:16:42 -- nvmf/run.sh@29 -- # port=4402 00:07:23.060 17:16:42 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:23.060 17:16:42 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:23.060 17:16:42 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:23.060 17:16:42 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 -r /var/tmp/spdk2.sock 00:07:23.060 [2024-11-09 17:16:42.682185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.060 [2024-11-09 17:16:42.682257] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366649 ] 00:07:23.060 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.319 [2024-11-09 17:16:42.857058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.320 [2024-11-09 17:16:42.920785] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.320 [2024-11-09 17:16:42.920928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.320 [2024-11-09 17:16:42.978689] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.320 [2024-11-09 17:16:42.995079] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:23.320 INFO: Running with entropic power schedule (0xFF, 100). 00:07:23.320 INFO: Seed: 1051016955 00:07:23.320 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:23.320 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:23.320 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:23.320 INFO: A corpus is not provided, starting from an empty corpus 00:07:23.320 #2 INITED exec/s: 0 rss: 60Mb 00:07:23.320 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:23.320 This may also happen if the target rejected all inputs we tried so far 00:07:23.320 [2024-11-09 17:16:43.060122] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.320 [2024-11-09 17:16:43.060245] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.320 [2024-11-09 17:16:43.060351] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.320 [2024-11-09 17:16:43.060457] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.320 [2024-11-09 17:16:43.060666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.320 [2024-11-09 17:16:43.060700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.320 [2024-11-09 17:16:43.060755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.320 [2024-11-09 17:16:43.060770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.320 [2024-11-09 17:16:43.060822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.320 [2024-11-09 17:16:43.060837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.320 [2024-11-09 17:16:43.060890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.320 [2024-11-09 17:16:43.060904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.892 NEW_FUNC[1/670]: 0x43db78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:23.892 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:23.892 #8 NEW cov: 11579 ft: 11580 corp: 2/31b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:23.892 [2024-11-09 17:16:43.381055] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.892 [2024-11-09 17:16:43.381205] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.892 [2024-11-09 17:16:43.381318] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.892 [2024-11-09 17:16:43.381427] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.892 [2024-11-09 17:16:43.381656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.892 [2024-11-09 17:16:43.381713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.892 [2024-11-09 17:16:43.381796] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.892 [2024-11-09 17:16:43.381825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.892 [2024-11-09 17:16:43.381903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.892 [2024-11-09 17:16:43.381932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.892 [2024-11-09 17:16:43.382008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.892 [2024-11-09 17:16:43.382036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.892 #10 NEW cov: 11692 ft: 12238 corp: 3/61b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:23.892 [2024-11-09 17:16:43.420938] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.892 [2024-11-09 17:16:43.421078] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.892 [2024-11-09 17:16:43.421183] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.892 [2024-11-09 17:16:43.421283] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.892 [2024-11-09 17:16:43.421493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.892 [2024-11-09 17:16:43.421520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.892 [2024-11-09 17:16:43.421577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.892 [2024-11-09 17:16:43.421591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.892 [2024-11-09 17:16:43.421642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.892 [2024-11-09 17:16:43.421657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.892 [2024-11-09 17:16:43.421709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.892 [2024-11-09 17:16:43.421723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.892 #11 NEW cov: 11698 ft: 12469 corp: 4/91b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:23.893 [2024-11-09 17:16:43.460972] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.461084] 
ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.461282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.461309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.893 [2024-11-09 17:16:43.461361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.461376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.893 #12 NEW cov: 11783 ft: 13229 corp: 5/107b lim: 35 exec/s: 0 rss: 68Mb L: 16/30 MS: 1 EraseBytes- 00:07:23.893 [2024-11-09 17:16:43.501140] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.501252] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.501355] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.501455] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.501663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.501689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.893 [2024-11-09 17:16:43.501740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.501755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.893 [2024-11-09 17:16:43.501805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.501819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.893 [2024-11-09 17:16:43.501869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.501883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.893 #13 NEW cov: 11783 ft: 13374 corp: 6/138b lim: 35 exec/s: 0 rss: 68Mb L: 31/31 MS: 1 CrossOver- 00:07:23.893 [2024-11-09 17:16:43.541281] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.541393] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.541496] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.541792] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.541819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.893 [2024-11-09 17:16:43.541871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.541886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.893 [2024-11-09 17:16:43.541939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.541953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.893 [2024-11-09 17:16:43.542006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.542019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.893 #14 NEW cov: 11793 ft: 13477 corp: 7/168b lim: 35 exec/s: 0 rss: 68Mb L: 30/31 MS: 1 ChangeBit- 00:07:23.893 [2024-11-09 17:16:43.581607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f3f3000a cdw11:f300f3f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.581632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.893 [2024-11-09 17:16:43.581684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f3f300f3 cdw11:f300f3f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.581697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.893 #16 NEW cov: 11793 ft: 13590 corp: 8/185b lim: 35 exec/s: 0 rss: 68Mb L: 17/31 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:23.893 [2024-11-09 17:16:43.621469] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.621593] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.893 [2024-11-09 17:16:43.621769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.621796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.893 [2024-11-09 17:16:43.621849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000cf00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.893 [2024-11-09 17:16:43.621864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.893 #17 NEW cov: 11793 ft: 13681 corp: 9/201b lim: 35 exec/s: 0 rss: 68Mb L: 
16/31 MS: 1 ChangeByte- 00:07:24.252 [2024-11-09 17:16:43.661931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f3f3000a cdw11:f300f3f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.661957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 17:16:43.662010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f3f300cb cdw11:f300f3f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.662024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.252 #18 NEW cov: 11793 ft: 13772 corp: 10/218b lim: 35 exec/s: 0 rss: 69Mb L: 17/31 MS: 1 ChangeByte- 00:07:24.252 [2024-11-09 17:16:43.701707] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.701820] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.702126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.702153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 17:16:43.702204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00cfff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.702220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 17:16:43.702275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.702289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.252 #19 NEW cov: 11793 ft: 13995 corp: 11/244b lim: 35 exec/s: 0 rss: 69Mb L: 26/31 MS: 1 InsertRepeatedBytes- 00:07:24.252 [2024-11-09 17:16:43.741896] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.742010] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.742112] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.742308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.742333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 17:16:43.742386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.742401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 
17:16:43.742453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.742467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 17:16:43.742516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.742535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.252 #20 NEW cov: 11793 ft: 14022 corp: 12/274b lim: 35 exec/s: 0 rss: 69Mb L: 30/31 MS: 1 ChangeBit- 00:07:24.252 [2024-11-09 17:16:43.781922] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.782051] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.782156] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.782259] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.782460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.782487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 17:16:43.782543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.782559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 17:16:43.782610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.782625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 17:16:43.782676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.782694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.252 #23 NEW cov: 11793 ft: 14040 corp: 13/303b lim: 35 exec/s: 0 rss: 69Mb L: 29/31 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:24.252 [2024-11-09 17:16:43.812038] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.812151] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.252 [2024-11-09 17:16:43.812440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.252 [2024-11-09 17:16:43.812467] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.252 [2024-11-09 17:16:43.812518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff008fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.812538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.812590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.812603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.253 #24 NEW cov: 11793 ft: 14125 corp: 14/329b lim: 35 exec/s: 0 rss: 69Mb L: 26/31 MS: 1 ChangeBit- 00:07:24.253 [2024-11-09 17:16:43.852148] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.852263] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.852453] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.852663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.852689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.852741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.852756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.852807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:491e004e cdw11:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.852820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.852871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.852886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.253 #25 NEW cov: 11793 ft: 14148 corp: 15/359b lim: 35 exec/s: 0 rss: 69Mb L: 30/31 MS: 1 CMP- DE: "NI\036\002\000\000\000\000"- 00:07:24.253 [2024-11-09 17:16:43.892253] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.892364] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.892472] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.892594] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid 
NSID 0 00:07:24.253 [2024-11-09 17:16:43.892790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.892820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.892873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3d000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.892887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.892939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.892955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.893005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.893020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.253 #26 NEW cov: 11793 ft: 14167 corp: 16/389b lim: 35 exec/s: 0 rss: 69Mb L: 30/31 MS: 1 ChangeByte- 00:07:24.253 [2024-11-09 17:16:43.932372] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.932597] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.932704] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.932912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.932939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.932992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0000007a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.933006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.933055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:4e490000 cdw11:00001e02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.933070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.933120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.933134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:24.253 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:24.253 #27 NEW cov: 11816 ft: 14201 corp: 17/420b lim: 35 exec/s: 0 rss: 69Mb L: 31/31 MS: 1 InsertByte- 00:07:24.253 [2024-11-09 17:16:43.982542] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.982658] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.982761] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.982864] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.253 [2024-11-09 17:16:43.983066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.983096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.983150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.983166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.983219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.983233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.253 [2024-11-09 17:16:43.983284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.253 [2024-11-09 17:16:43.983299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.513 #28 NEW cov: 11816 ft: 14223 corp: 18/450b lim: 35 exec/s: 0 rss: 69Mb L: 30/31 MS: 1 CrossOver- 00:07:24.513 [2024-11-09 17:16:44.022629] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.513 [2024-11-09 17:16:44.022769] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.022875] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.023080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.023107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.023159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.023174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:24.514 [2024-11-09 17:16:44.023225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.023240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.514 #29 NEW cov: 11816 ft: 14240 corp: 19/477b lim: 35 exec/s: 29 rss: 69Mb L: 27/31 MS: 1 EraseBytes- 00:07:24.514 [2024-11-09 17:16:44.062732] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.062858] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.062964] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.063069] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.063277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.063304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.063355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.063372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.063425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.063443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.063494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.063509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.514 #30 NEW cov: 11816 ft: 14262 corp: 20/508b lim: 35 exec/s: 30 rss: 69Mb L: 31/31 MS: 1 ChangeBit- 00:07:24.514 [2024-11-09 17:16:44.102865] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.102980] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.103085] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.103187] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.103411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.103437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:24.514 [2024-11-09 17:16:44.103487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.103504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.103551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:20000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.103566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.103617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.103633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.514 #31 NEW cov: 11816 ft: 14281 corp: 21/536b lim: 35 exec/s: 31 rss: 69Mb L: 28/31 MS: 1 InsertByte- 00:07:24.514 [2024-11-09 17:16:44.142943] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.143349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00fd0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.143375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.143428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00cfff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.143442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.143491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.143504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.514 #32 NEW cov: 11816 ft: 14318 corp: 22/562b lim: 35 exec/s: 32 rss: 69Mb L: 26/31 MS: 1 ChangeBinInt- 00:07:24.514 [2024-11-09 17:16:44.183020] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.183156] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.183355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.183382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.183435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.183451] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.514 #33 NEW cov: 11816 ft: 14364 corp: 23/577b lim: 35 exec/s: 33 rss: 69Mb L: 15/31 MS: 1 EraseBytes- 00:07:24.514 [2024-11-09 17:16:44.223187] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.223301] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.223404] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.223505] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.223740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.223767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.223821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.223836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.223881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:56000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.223896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.223947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.223961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.514 #34 NEW cov: 11816 ft: 14370 corp: 24/607b lim: 35 exec/s: 34 rss: 69Mb L: 30/31 MS: 1 ChangeByte- 00:07:24.514 [2024-11-09 17:16:44.253248] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.253359] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.253463] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.514 [2024-11-09 17:16:44.253662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.253688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.514 [2024-11-09 17:16:44.253744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.253759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:24.514 [2024-11-09 17:16:44.253810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.514 [2024-11-09 17:16:44.253830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.514 #35 NEW cov: 11816 ft: 14415 corp: 25/629b lim: 35 exec/s: 35 rss: 69Mb L: 22/31 MS: 1 EraseBytes- 00:07:24.774 [2024-11-09 17:16:44.293404] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.293515] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.293625] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.293727] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.293928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.293953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.294006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.294020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.294072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.294087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.294139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.294153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.775 #36 NEW cov: 11816 ft: 14450 corp: 26/659b lim: 35 exec/s: 36 rss: 69Mb L: 30/31 MS: 1 ChangeBit- 00:07:24.775 [2024-11-09 17:16:44.333519] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.333653] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.334042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.334068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.334122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.334138] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.334191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.334205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.334257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.334271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.775 #37 NEW cov: 11816 ft: 14465 corp: 27/693b lim: 35 exec/s: 37 rss: 70Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:24.775 [2024-11-09 17:16:44.373642] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.373784] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.373980] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.374184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.374210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.374262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.374277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.374326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:491e004e cdw11:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.374339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.374391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.374404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.775 #38 NEW cov: 11816 ft: 14471 corp: 28/723b lim: 35 exec/s: 38 rss: 70Mb L: 30/34 MS: 1 ShuffleBytes- 00:07:24.775 [2024-11-09 17:16:44.413751] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.413879] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.413983] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.414195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.414221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.414276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.414292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.414346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.414360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.775 #39 NEW cov: 11816 ft: 14505 corp: 29/750b lim: 35 exec/s: 39 rss: 70Mb L: 27/34 MS: 1 ChangeByte- 00:07:24.775 [2024-11-09 17:16:44.453879] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.453992] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.454099] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.454203] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.454412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.454439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.454498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.454514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.454570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.454585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.454638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.454652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.775 #40 NEW cov: 11816 ft: 14512 corp: 30/784b lim: 35 exec/s: 40 rss: 70Mb L: 34/34 MS: 1 CrossOver- 00:07:24.775 [2024-11-09 17:16:44.484018] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.484129] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.484339] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000036 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.484363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.484418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.484433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.484484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.484500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.775 #41 NEW cov: 11816 ft: 14518 corp: 31/811b lim: 35 exec/s: 41 rss: 70Mb L: 27/34 MS: 1 ChangeByte- 00:07:24.775 [2024-11-09 17:16:44.524105] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.524221] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.524327] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.524433] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:24.775 [2024-11-09 17:16:44.524739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.524765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.775 [2024-11-09 17:16:44.524817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.775 [2024-11-09 17:16:44.524831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.776 [2024-11-09 17:16:44.524881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.776 [2024-11-09 17:16:44.524896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.776 [2024-11-09 17:16:44.524950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:f6000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.776 [2024-11-09 17:16:44.524965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.776 [2024-11-09 17:16:44.525014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:f60000f6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.776 [2024-11-09 17:16:44.525027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.036 #42 NEW cov: 11816 ft: 14637 corp: 32/846b lim: 35 exec/s: 42 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:25.036 [2024-11-09 17:16:44.564316] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.564429] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.564537] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.564737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000036 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.564763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.564816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:84000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.564830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.564881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:20000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.564895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.564946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.564961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.036 #43 NEW cov: 11816 ft: 14649 corp: 33/874b lim: 35 exec/s: 43 rss: 70Mb L: 28/35 MS: 1 InsertByte- 00:07:25.036 [2024-11-09 17:16:44.604303] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.604413] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.604517] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.604721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.604748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.604803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000cf00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.604817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.604872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0a4e0000 cdw11:0200491e SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.604887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.036 #44 NEW cov: 11816 ft: 14651 corp: 34/898b lim: 35 exec/s: 44 rss: 70Mb L: 24/35 MS: 1 PersAutoDict- DE: "NI\036\002\000\000\000\000"- 00:07:25.036 [2024-11-09 17:16:44.644375] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.644504] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.644723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.644750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.644802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.644817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.036 #45 NEW cov: 11816 ft: 14676 corp: 35/914b lim: 35 exec/s: 45 rss: 70Mb L: 16/35 MS: 1 InsertByte- 00:07:25.036 [2024-11-09 17:16:44.684567] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.684979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.685005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.685057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.685071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.685125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.685138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.036 #46 NEW cov: 11816 ft: 14692 corp: 36/940b lim: 35 exec/s: 46 rss: 70Mb L: 26/35 MS: 1 CopyPart- 00:07:25.036 [2024-11-09 17:16:44.724614] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.724963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00fd0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.724988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.725042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:25.036 [2024-11-09 17:16:44.725055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.036 #47 NEW cov: 11816 ft: 14706 corp: 37/957b lim: 35 exec/s: 47 rss: 70Mb L: 17/35 MS: 1 EraseBytes- 00:07:25.036 [2024-11-09 17:16:44.764898] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.765013] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.765120] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.765224] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.036 [2024-11-09 17:16:44.765513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.765546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.765600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.765614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.765666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.765681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.765733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:f6000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.765747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.036 [2024-11-09 17:16:44.765799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:f60000f6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.036 [2024-11-09 17:16:44.765812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.036 #48 NEW cov: 11816 ft: 14720 corp: 38/992b lim: 35 exec/s: 48 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:25.296 [2024-11-09 17:16:44.805155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f3f3000a cdw11:f300f3f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.296 [2024-11-09 17:16:44.805180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.296 [2024-11-09 17:16:44.805232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f34100cb cdw11:f300f3f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.296 [2024-11-09 17:16:44.805246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.296 #49 NEW 
cov: 11816 ft: 14733 corp: 39/1010b lim: 35 exec/s: 49 rss: 70Mb L: 18/35 MS: 1 InsertByte- 00:07:25.296 [2024-11-09 17:16:44.845065] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.296 [2024-11-09 17:16:44.845180] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.296 [2024-11-09 17:16:44.845575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.296 [2024-11-09 17:16:44.845603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.296 [2024-11-09 17:16:44.845659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:cf000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.845675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:44.845727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.845741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:44.845794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ff0000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.845806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.297 #50 NEW cov: 11816 ft: 14738 corp: 40/1038b lim: 35 exec/s: 50 rss: 70Mb L: 28/35 MS: 1 CopyPart- 00:07:25.297 [2024-11-09 17:16:44.885117] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:44.885529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.885556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:44.885609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00cfff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.885623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:44.885675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.885687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.297 #51 NEW cov: 11816 ft: 14749 corp: 41/1064b lim: 35 exec/s: 51 rss: 70Mb L: 26/35 MS: 1 ChangeBinInt- 00:07:25.297 [2024-11-09 17:16:44.925257] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:44.925373] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify 
Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:44.925762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:1c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.925789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:44.925843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.925857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:44.925908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.925922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:44.925975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ff0000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.925988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.297 #52 NEW cov: 11816 ft: 14754 corp: 42/1092b lim: 35 exec/s: 52 rss: 70Mb L: 28/35 MS: 1 ChangeBinInt- 00:07:25.297 [2024-11-09 17:16:44.965351] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:44.965482] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:44.965597] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:44.965809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.965835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:44.965889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.965905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:44.965961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:44.965975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.297 #53 NEW cov: 11816 ft: 14765 corp: 43/1116b lim: 35 exec/s: 53 rss: 70Mb L: 24/35 MS: 1 EraseBytes- 00:07:25.297 [2024-11-09 17:16:45.005499] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:45.005637] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 
00:07:25.297 [2024-11-09 17:16:45.005772] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:45.005877] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:45.006086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:45.006114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:45.006168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:45.006184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:45.006237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:45.006253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:45.006304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:45.006320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.297 #54 NEW cov: 11816 ft: 14773 corp: 44/1145b lim: 35 exec/s: 54 rss: 70Mb L: 29/35 MS: 1 ShuffleBytes- 00:07:25.297 [2024-11-09 17:16:45.045580] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:45.045718] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:45.045824] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.297 [2024-11-09 17:16:45.046034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000041 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:45.046061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:45.046113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000cf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:45.046127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.297 [2024-11-09 17:16:45.046178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000a0000 cdw11:1e004e49 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.297 [2024-11-09 17:16:45.046193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.557 #55 NEW cov: 11816 ft: 14799 corp: 45/1170b lim: 35 exec/s: 27 rss: 70Mb L: 25/35 MS: 1 InsertByte- 
00:07:25.557 #55 DONE cov: 11816 ft: 14799 corp: 45/1170b lim: 35 exec/s: 27 rss: 70Mb 00:07:25.557 ###### Recommended dictionary. ###### 00:07:25.557 "NI\036\002\000\000\000\000" # Uses: 1 00:07:25.557 ###### End of recommended dictionary. ###### 00:07:25.557 Done 55 runs in 2 second(s) 00:07:25.557 17:16:45 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_2.conf 00:07:25.557 17:16:45 -- ../common.sh@72 -- # (( i++ )) 00:07:25.557 17:16:45 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:25.557 17:16:45 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:25.557 17:16:45 -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:25.557 17:16:45 -- nvmf/run.sh@24 -- # local timen=1 00:07:25.557 17:16:45 -- nvmf/run.sh@25 -- # local core=0x1 00:07:25.557 17:16:45 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:25.557 17:16:45 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:25.557 17:16:45 -- nvmf/run.sh@29 -- # printf %02d 3 00:07:25.557 17:16:45 -- nvmf/run.sh@29 -- # port=4403 00:07:25.557 17:16:45 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:25.557 17:16:45 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:25.557 17:16:45 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:25.557 17:16:45 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 -r /var/tmp/spdk3.sock 00:07:25.557 [2024-11-09 17:16:45.238549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.557 [2024-11-09 17:16:45.238620] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366953 ] 00:07:25.557 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.816 [2024-11-09 17:16:45.432312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.816 [2024-11-09 17:16:45.497219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:25.816 [2024-11-09 17:16:45.497348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.816 [2024-11-09 17:16:45.555345] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.816 [2024-11-09 17:16:45.571713] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:26.075 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:26.075 INFO: Seed: 3627019970 00:07:26.075 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:26.075 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:26.075 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:26.075 INFO: A corpus is not provided, starting from an empty corpus 00:07:26.075 #2 INITED exec/s: 0 rss: 60Mb 00:07:26.075 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:26.075 This may also happen if the target rejected all inputs we tried so far 00:07:26.075 [2024-11-09 17:16:45.638400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.075 [2024-11-09 17:16:45.638440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.335 NEW_FUNC[1/678]: 0x43f858 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:26.335 NEW_FUNC[2/678]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:26.335 #3 NEW cov: 11783 ft: 11784 corp: 2/9b lim: 20 exec/s: 0 rss: 67Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:26.335 [2024-11-09 17:16:45.968937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.335 [2024-11-09 17:16:45.968987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.335 #4 NEW cov: 11898 ft: 12506 corp: 3/17b lim: 20 exec/s: 0 rss: 67Mb L: 8/8 MS: 1 ChangeByte- 00:07:26.335 [2024-11-09 17:16:46.018918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.335 [2024-11-09 17:16:46.018949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.335 #5 NEW cov: 11904 ft: 12799 corp: 4/25b lim: 20 exec/s: 0 rss: 67Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:26.335 [2024-11-09 17:16:46.058876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.335 [2024-11-09 17:16:46.058903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.335 #6 NEW cov: 11995 ft: 13180 corp: 5/34b lim: 20 exec/s: 0 rss: 67Mb L: 9/9 MS: 1 CrossOver- 00:07:26.335 [2024-11-09 17:16:46.099259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.335 [2024-11-09 17:16:46.099286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.594 #7 NEW cov: 11995 ft: 13309 corp: 6/43b lim: 20 exec/s: 0 rss: 67Mb L: 9/9 MS: 1 InsertByte- 00:07:26.594 [2024-11-09 17:16:46.139341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.594 [2024-11-09 17:16:46.139372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:26.594 #8 NEW cov: 11995 ft: 13368 corp: 7/51b lim: 20 exec/s: 0 rss: 67Mb L: 8/9 MS: 1 ChangeByte- 00:07:26.594 [2024-11-09 17:16:46.179376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.594 [2024-11-09 17:16:46.179407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.594 #9 NEW cov: 11995 ft: 13404 corp: 8/60b lim: 20 exec/s: 0 rss: 67Mb L: 9/9 MS: 1 ChangeBit- 00:07:26.594 [2024-11-09 17:16:46.229842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.594 [2024-11-09 17:16:46.229870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.594 #10 NEW cov: 12012 ft: 13778 corp: 9/77b lim: 20 exec/s: 0 rss: 68Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:07:26.594 [2024-11-09 17:16:46.279914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.594 [2024-11-09 17:16:46.279945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.594 #11 NEW cov: 12016 ft: 13909 corp: 10/91b lim: 20 exec/s: 0 rss: 68Mb L: 14/17 MS: 1 CrossOver- 00:07:26.594 [2024-11-09 17:16:46.320078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.594 [2024-11-09 17:16:46.320109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.594 #12 NEW cov: 12016 ft: 13934 corp: 11/108b lim: 20 exec/s: 0 rss: 68Mb L: 17/17 MS: 1 ChangeBit- 00:07:26.853 [2024-11-09 17:16:46.370253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.853 [2024-11-09 17:16:46.370283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.854 #13 NEW cov: 12016 ft: 13994 corp: 12/125b lim: 20 exec/s: 0 rss: 68Mb L: 17/17 MS: 1 ChangeBinInt- 00:07:26.854 [2024-11-09 17:16:46.409825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.854 [2024-11-09 17:16:46.409855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.854 #14 NEW cov: 12016 ft: 14280 corp: 13/131b lim: 20 exec/s: 0 rss: 68Mb L: 6/17 MS: 1 EraseBytes- 00:07:26.854 #15 NEW cov: 12016 ft: 14371 corp: 14/139b lim: 20 exec/s: 0 rss: 68Mb L: 8/17 MS: 1 ChangeByte- 00:07:26.854 [2024-11-09 17:16:46.490570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.854 [2024-11-09 17:16:46.490599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.854 NEW_FUNC[1/2]: 0x13236d8 in _nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3438 00:07:26.854 NEW_FUNC[2/2]: 0x194e708 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:26.854 #16 NEW cov: 12066 ft: 14530 corp: 15/152b lim: 20 exec/s: 0 rss: 68Mb L: 13/17 MS: 1 CopyPart- 00:07:26.854 #17 NEW cov: 12066 ft: 14628 corp: 16/158b lim: 20 exec/s: 0 rss: 68Mb L: 6/17 MS: 1 CrossOver- 00:07:26.854 [2024-11-09 17:16:46.570891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.854 [2024-11-09 17:16:46.570919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.854 #18 NEW cov: 12066 ft: 14650 corp: 17/175b lim: 20 exec/s: 0 rss: 68Mb L: 17/17 MS: 1 ChangeBinInt- 00:07:27.112 #19 NEW cov: 12066 ft: 14709 corp: 18/183b lim: 20 exec/s: 19 rss: 68Mb L: 8/17 MS: 1 ChangeBinInt- 00:07:27.112 #20 NEW cov: 12066 ft: 14751 corp: 19/200b lim: 20 exec/s: 20 rss: 68Mb L: 17/17 MS: 1 CopyPart- 00:07:27.112 #21 NEW cov: 12066 ft: 14760 corp: 20/217b lim: 20 exec/s: 21 rss: 68Mb L: 17/17 MS: 1 ChangeBit- 00:07:27.112 [2024-11-09 17:16:46.751426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.113 [2024-11-09 17:16:46.751457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.113 #22 NEW cov: 12066 ft: 14762 corp: 21/234b lim: 20 exec/s: 22 rss: 68Mb L: 17/17 MS: 1 CMP- DE: "\204q\250\004\317\277\204\000"- 00:07:27.113 #23 NEW cov: 12067 ft: 14786 corp: 22/248b lim: 20 exec/s: 23 rss: 68Mb L: 14/17 MS: 1 CMP- DE: "\001\000\000\000\002\036IM"- 00:07:27.113 #24 NEW cov: 12067 ft: 14789 corp: 23/262b lim: 20 exec/s: 24 rss: 68Mb L: 14/17 MS: 1 ChangeByte- 00:07:27.371 #25 NEW cov: 12067 ft: 14823 corp: 24/274b lim: 20 exec/s: 25 rss: 68Mb L: 12/17 MS: 1 InsertRepeatedBytes- 00:07:27.372 #26 NEW cov: 12067 ft: 14880 corp: 25/280b lim: 20 exec/s: 26 rss: 68Mb L: 6/17 MS: 1 ChangeBinInt- 00:07:27.372 #27 NEW cov: 12067 ft: 14901 corp: 26/286b lim: 20 exec/s: 27 rss: 68Mb L: 6/17 MS: 1 ShuffleBytes- 00:07:27.372 #28 NEW cov: 12067 ft: 14931 corp: 27/303b lim: 20 exec/s: 28 rss: 69Mb L: 17/17 MS: 1 ShuffleBytes- 00:07:27.372 #29 NEW cov: 12067 ft: 14937 corp: 28/314b lim: 20 exec/s: 29 rss: 69Mb L: 11/17 MS: 1 CopyPart- 00:07:27.372 #30 NEW cov: 12067 ft: 14943 corp: 29/331b lim: 20 exec/s: 30 rss: 69Mb L: 17/17 MS: 1 ChangeByte- 00:07:27.631 #31 NEW cov: 12067 ft: 14949 corp: 30/337b lim: 20 exec/s: 31 rss: 69Mb L: 6/17 MS: 1 ChangeBinInt- 00:07:27.631 #35 NEW cov: 12067 ft: 14976 corp: 31/347b lim: 20 exec/s: 35 rss: 69Mb L: 10/17 MS: 4 ChangeByte-CopyPart-ChangeByte-PersAutoDict- DE: "\001\000\000\000\002\036IM"- 00:07:27.631 #36 NEW cov: 12067 ft: 15059 corp: 32/367b lim: 20 exec/s: 36 rss: 69Mb L: 20/20 MS: 1 CopyPart- 00:07:27.631 #37 NEW cov: 12067 ft: 15065 corp: 33/373b lim: 20 exec/s: 37 rss: 69Mb L: 6/20 MS: 1 ChangeBinInt- 00:07:27.631 [2024-11-09 17:16:47.302573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.631 [2024-11-09 17:16:47.302607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.631 #38 NEW cov: 12067 ft: 15075 corp: 34/378b lim: 20 exec/s: 38 rss: 69Mb L: 5/20 MS: 1 EraseBytes- 00:07:27.631 #39 NEW cov: 12067 ft: 15097 corp: 35/385b lim: 20 exec/s: 
39 rss: 69Mb L: 7/20 MS: 1 InsertByte- 00:07:27.890 #40 NEW cov: 12067 ft: 15104 corp: 36/398b lim: 20 exec/s: 40 rss: 69Mb L: 13/20 MS: 1 EraseBytes- 00:07:27.890 #41 NEW cov: 12067 ft: 15122 corp: 37/407b lim: 20 exec/s: 41 rss: 69Mb L: 9/20 MS: 1 InsertByte- 00:07:27.890 #43 NEW cov: 12067 ft: 15128 corp: 38/414b lim: 20 exec/s: 43 rss: 69Mb L: 7/20 MS: 2 ChangeByte-CrossOver- 00:07:27.890 #44 NEW cov: 12067 ft: 15197 corp: 39/434b lim: 20 exec/s: 44 rss: 69Mb L: 20/20 MS: 1 CopyPart- 00:07:27.890 #45 NEW cov: 12067 ft: 15207 corp: 40/442b lim: 20 exec/s: 45 rss: 69Mb L: 8/20 MS: 1 ChangeBit- 00:07:27.890 [2024-11-09 17:16:47.593477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.890 [2024-11-09 17:16:47.593506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.890 #46 NEW cov: 12067 ft: 15211 corp: 41/448b lim: 20 exec/s: 23 rss: 69Mb L: 6/20 MS: 1 ChangeBinInt- 00:07:27.890 #46 DONE cov: 12067 ft: 15211 corp: 41/448b lim: 20 exec/s: 23 rss: 69Mb 00:07:27.890 ###### Recommended dictionary. ###### 00:07:27.890 "\204q\250\004\317\277\204\000" # Uses: 0 00:07:27.890 "\001\000\000\000\002\036IM" # Uses: 1 00:07:27.890 ###### End of recommended dictionary. ###### 00:07:27.890 Done 46 runs in 2 second(s) 00:07:28.150 17:16:47 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_3.conf 00:07:28.150 17:16:47 -- ../common.sh@72 -- # (( i++ )) 00:07:28.150 17:16:47 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:28.150 17:16:47 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:28.150 17:16:47 -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:28.150 17:16:47 -- nvmf/run.sh@24 -- # local timen=1 00:07:28.150 17:16:47 -- nvmf/run.sh@25 -- # local core=0x1 00:07:28.150 17:16:47 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:28.150 17:16:47 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:28.150 17:16:47 -- nvmf/run.sh@29 -- # printf %02d 4 00:07:28.150 17:16:47 -- nvmf/run.sh@29 -- # port=4404 00:07:28.150 17:16:47 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:28.150 17:16:47 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:28.150 17:16:47 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:28.150 17:16:47 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 -r /var/tmp/spdk4.sock 00:07:28.150 [2024-11-09 17:16:47.780498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:28.150 [2024-11-09 17:16:47.780585] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367485 ] 00:07:28.150 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.410 [2024-11-09 17:16:47.971111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.410 [2024-11-09 17:16:48.034894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:28.410 [2024-11-09 17:16:48.035021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.410 [2024-11-09 17:16:48.092772] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.410 [2024-11-09 17:16:48.109090] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:28.410 INFO: Running with entropic power schedule (0xFF, 100). 00:07:28.410 INFO: Seed: 1870043254 00:07:28.410 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:28.410 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:28.410 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:28.410 INFO: A corpus is not provided, starting from an empty corpus 00:07:28.410 #2 INITED exec/s: 0 rss: 60Mb 00:07:28.410 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:28.410 This may also happen if the target rejected all inputs we tried so far 00:07:28.410 [2024-11-09 17:16:48.164427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.410 [2024-11-09 17:16:48.164455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.410 [2024-11-09 17:16:48.164528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.410 [2024-11-09 17:16:48.164543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.929 NEW_FUNC[1/671]: 0x440958 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:28.929 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:28.929 #14 NEW cov: 11601 ft: 11602 corp: 2/18b lim: 35 exec/s: 0 rss: 68Mb L: 17/17 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:28.929 [2024-11-09 17:16:48.485213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.485245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.929 [2024-11-09 17:16:48.485313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.485327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.929 #15 NEW cov: 11714 ft: 12050 corp: 3/36b lim: 35 exec/s: 0 rss: 68Mb L: 18/18 MS: 1 InsertByte- 00:07:28.929 [2024-11-09 17:16:48.535141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000102 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.535166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.929 #23 NEW cov: 11720 ft: 12868 corp: 4/43b lim: 35 exec/s: 0 rss: 68Mb L: 7/18 MS: 3 ShuffleBytes-CMP-CopyPart- DE: "\001\000\002\000"- 00:07:28.929 [2024-11-09 17:16:48.575387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.575413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.929 [2024-11-09 17:16:48.575466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.575479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.929 #24 NEW cov: 11805 ft: 13219 corp: 5/60b lim: 35 exec/s: 0 rss: 68Mb L: 17/18 MS: 1 ShuffleBytes- 00:07:28.929 [2024-11-09 17:16:48.615486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.615511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.929 [2024-11-09 17:16:48.615568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.615585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.929 #25 NEW cov: 11805 ft: 13417 corp: 6/78b lim: 35 exec/s: 0 rss: 68Mb L: 18/18 MS: 1 ChangeByte- 00:07:28.929 [2024-11-09 17:16:48.655590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.655615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.929 [2024-11-09 17:16:48.655683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6b6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.655696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.929 #26 NEW cov: 11805 ft: 13515 corp: 7/96b lim: 35 exec/s: 0 rss: 68Mb L: 18/18 MS: 1 ChangeBit- 00:07:28.929 [2024-11-09 17:16:48.695716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.695741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.929 [2024-11-09 17:16:48.695793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.929 [2024-11-09 17:16:48.695807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.189 #27 NEW cov: 11805 ft: 13655 corp: 8/114b lim: 35 exec/s: 0 rss: 68Mb L: 18/18 MS: 1 ChangeBit- 00:07:29.189 [2024-11-09 17:16:48.735816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6e6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.735841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.189 [2024-11-09 17:16:48.735892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.735905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.189 #28 NEW cov: 11805 ft: 13739 corp: 9/132b lim: 35 exec/s: 0 rss: 68Mb L: 18/18 MS: 1 ShuffleBytes- 00:07:29.189 [2024-11-09 17:16:48.775795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00290102 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.775820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.189 #34 NEW cov: 11805 ft: 13801 corp: 10/139b lim: 35 exec/s: 0 rss: 69Mb L: 7/18 MS: 1 ChangeByte- 00:07:29.189 [2024-11-09 17:16:48.816059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.816084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.189 [2024-11-09 17:16:48.816136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00006f6f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.816149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.189 #35 NEW cov: 11805 ft: 13825 corp: 11/157b lim: 35 exec/s: 0 rss: 69Mb L: 18/18 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:29.189 [2024-11-09 17:16:48.856149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.856174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.189 [2024-11-09 17:16:48.856228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00006f6f cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.856241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.189 #36 NEW cov: 11805 ft: 13848 corp: 12/175b lim: 35 exec/s: 0 rss: 
69Mb L: 18/18 MS: 1 CopyPart- 00:07:29.189 [2024-11-09 17:16:48.896444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.896470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.189 [2024-11-09 17:16:48.896527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:ff830001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.896541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.189 [2024-11-09 17:16:48.896608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cbc5d044 cdw11:686f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.896621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.189 #37 NEW cov: 11805 ft: 14107 corp: 13/200b lim: 35 exec/s: 0 rss: 69Mb L: 25/25 MS: 1 CMP- DE: "\377\203\277\320D\313\305h"- 00:07:29.189 [2024-11-09 17:16:48.936266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0100250a cdw11:02000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.189 [2024-11-09 17:16:48.936290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.189 #41 NEW cov: 11805 ft: 14154 corp: 14/207b lim: 35 exec/s: 0 rss: 69Mb L: 7/25 MS: 4 InsertByte-CopyPart-PersAutoDict-InsertByte- DE: "\001\000\002\000"- 00:07:29.449 [2024-11-09 17:16:48.976506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6fa72f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:48.976533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.449 [2024-11-09 17:16:48.976585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:48.976598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.449 #42 NEW cov: 11805 ft: 14167 corp: 15/225b lim: 35 exec/s: 0 rss: 69Mb L: 18/25 MS: 1 InsertByte- 00:07:29.449 [2024-11-09 17:16:49.006426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000102 cdw11:02200000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:49.006451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.449 #43 NEW cov: 11805 ft: 14225 corp: 16/232b lim: 35 exec/s: 0 rss: 69Mb L: 7/25 MS: 1 ChangeBit- 00:07:29.449 [2024-11-09 17:16:49.046551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:01020102 cdw11:00290000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:49.046575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.449 NEW_FUNC[1/1]: 0x194e708 
in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:29.449 #44 NEW cov: 11828 ft: 14264 corp: 17/239b lim: 35 exec/s: 0 rss: 69Mb L: 7/25 MS: 1 CopyPart- 00:07:29.449 [2024-11-09 17:16:49.086681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:02000100 cdw11:01020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:49.086708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.449 #45 NEW cov: 11828 ft: 14323 corp: 18/250b lim: 35 exec/s: 0 rss: 69Mb L: 11/25 MS: 1 PersAutoDict- DE: "\001\000\002\000"- 00:07:29.449 [2024-11-09 17:16:49.126799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:25005c00 cdw11:01020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:49.126823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.449 #46 NEW cov: 11828 ft: 14340 corp: 19/257b lim: 35 exec/s: 46 rss: 69Mb L: 7/25 MS: 1 ShuffleBytes- 00:07:29.449 [2024-11-09 17:16:49.167064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f2f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:49.167088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.449 [2024-11-09 17:16:49.167139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6e6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:49.167152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.449 #47 NEW cov: 11828 ft: 14369 corp: 20/274b lim: 35 exec/s: 47 rss: 69Mb L: 17/25 MS: 1 CrossOver- 00:07:29.449 [2024-11-09 17:16:49.197297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:2f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:49.197321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.449 [2024-11-09 17:16:49.197373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f2f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:49.197386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.449 [2024-11-09 17:16:49.197436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6e6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.449 [2024-11-09 17:16:49.197449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.709 #48 NEW cov: 11828 ft: 14425 corp: 21/296b lim: 35 exec/s: 48 rss: 69Mb L: 22/25 MS: 1 CopyPart- 00:07:29.709 [2024-11-09 17:16:49.237256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.237281] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.709 [2024-11-09 17:16:49.237332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6d6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.237345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.709 #49 NEW cov: 11828 ft: 14438 corp: 22/314b lim: 35 exec/s: 49 rss: 69Mb L: 18/25 MS: 1 ChangeBit- 00:07:29.709 [2024-11-09 17:16:49.277396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f0a2f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.277421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.709 [2024-11-09 17:16:49.277470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6e6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.277486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.709 #50 NEW cov: 11828 ft: 14456 corp: 23/333b lim: 35 exec/s: 50 rss: 69Mb L: 19/25 MS: 1 CrossOver- 00:07:29.709 [2024-11-09 17:16:49.307330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0188250a cdw11:02000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.307354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.709 #51 NEW cov: 11828 ft: 14486 corp: 24/340b lim: 35 exec/s: 51 rss: 69Mb L: 7/25 MS: 1 ChangeByte- 00:07:29.709 [2024-11-09 17:16:49.347466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.347490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.709 #52 NEW cov: 11828 ft: 14531 corp: 25/347b lim: 35 exec/s: 52 rss: 69Mb L: 7/25 MS: 1 CrossOver- 00:07:29.709 [2024-11-09 17:16:49.387624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000102 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.387647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.709 #53 NEW cov: 11828 ft: 14542 corp: 26/354b lim: 35 exec/s: 53 rss: 69Mb L: 7/25 MS: 1 CopyPart- 00:07:29.709 [2024-11-09 17:16:49.428039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.428063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.709 [2024-11-09 17:16:49.428114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00006f6f cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.428128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.709 [2024-11-09 17:16:49.428178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff6fff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.428191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.709 #54 NEW cov: 11828 ft: 14552 corp: 27/379b lim: 35 exec/s: 54 rss: 69Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:29.709 [2024-11-09 17:16:49.467948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f930001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.467972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.709 [2024-11-09 17:16:49.468024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.709 [2024-11-09 17:16:49.468037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.969 #55 NEW cov: 11828 ft: 14555 corp: 28/396b lim: 35 exec/s: 55 rss: 69Mb L: 17/25 MS: 1 ChangeBinInt- 00:07:29.969 [2024-11-09 17:16:49.507908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:01020001 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.507932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.969 #56 NEW cov: 11828 ft: 14572 corp: 29/407b lim: 35 exec/s: 56 rss: 69Mb L: 11/25 MS: 1 ShuffleBytes- 00:07:29.969 [2024-11-09 17:16:49.548334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.548361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.969 [2024-11-09 17:16:49.548413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:37373737 cdw11:6e6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.548426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.969 [2024-11-09 17:16:49.548478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6f6f6f6f cdw11:7a6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.548491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.969 #57 NEW cov: 11828 ft: 14601 corp: 30/430b lim: 35 exec/s: 57 rss: 70Mb L: 23/25 MS: 1 InsertRepeatedBytes- 00:07:29.969 [2024-11-09 17:16:49.588161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:41000102 cdw11:29020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.588186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.969 #58 NEW cov: 11828 ft: 14620 corp: 31/438b lim: 35 exec/s: 58 rss: 70Mb L: 
8/25 MS: 1 InsertByte- 00:07:29.969 [2024-11-09 17:16:49.628400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.628424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.969 [2024-11-09 17:16:49.628475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:2f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.628488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.969 #59 NEW cov: 11828 ft: 14622 corp: 32/458b lim: 35 exec/s: 59 rss: 70Mb L: 20/25 MS: 1 CrossOver- 00:07:29.969 [2024-11-09 17:16:49.658699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:c16f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.658724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.969 [2024-11-09 17:16:49.658792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:ff830001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.658806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.969 [2024-11-09 17:16:49.658858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cbc5d044 cdw11:686f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.658871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.969 #60 NEW cov: 11828 ft: 14624 corp: 33/483b lim: 35 exec/s: 60 rss: 70Mb L: 25/25 MS: 1 ChangeByte- 00:07:29.969 [2024-11-09 17:16:49.698843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:c16f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.698867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.969 [2024-11-09 17:16:49.698936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:ff830001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.698949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.969 [2024-11-09 17:16:49.699001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cbc5d044 cdw11:686f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.969 [2024-11-09 17:16:49.699017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.969 #61 NEW cov: 11828 ft: 14639 corp: 34/508b lim: 35 exec/s: 61 rss: 70Mb L: 25/25 MS: 1 CrossOver- 00:07:30.228 [2024-11-09 17:16:49.738767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6e6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 
17:16:49.738792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.228 [2024-11-09 17:16:49.738846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 17:16:49.738859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.228 #62 NEW cov: 11828 ft: 14662 corp: 35/526b lim: 35 exec/s: 62 rss: 70Mb L: 18/25 MS: 1 CopyPart- 00:07:30.228 [2024-11-09 17:16:49.778902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 17:16:49.778927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.228 [2024-11-09 17:16:49.778981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f126f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 17:16:49.778995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.228 #63 NEW cov: 11828 ft: 14664 corp: 36/544b lim: 35 exec/s: 63 rss: 70Mb L: 18/25 MS: 1 ChangeBinInt- 00:07:30.228 [2024-11-09 17:16:49.808804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f7a0102 cdw11:6f7a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 17:16:49.808828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.228 #64 NEW cov: 11828 ft: 14695 corp: 37/551b lim: 35 exec/s: 64 rss: 70Mb L: 7/25 MS: 1 CrossOver- 00:07:30.228 [2024-11-09 17:16:49.848904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:25000102 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 17:16:49.848929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.228 #65 NEW cov: 11828 ft: 14719 corp: 38/559b lim: 35 exec/s: 65 rss: 70Mb L: 8/25 MS: 1 InsertByte- 00:07:30.228 [2024-11-09 17:16:49.889266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff5c00 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 17:16:49.889291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.228 [2024-11-09 17:16:49.889343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 17:16:49.889357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.228 #66 NEW cov: 11828 ft: 14749 corp: 39/577b lim: 35 exec/s: 66 rss: 70Mb L: 18/25 MS: 1 InsertRepeatedBytes- 00:07:30.228 [2024-11-09 17:16:49.929174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f2f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 17:16:49.929199] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.228 #67 NEW cov: 11828 ft: 14760 corp: 40/584b lim: 35 exec/s: 67 rss: 70Mb L: 7/25 MS: 1 CrossOver- 00:07:30.228 [2024-11-09 17:16:49.969280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:5cd00503 cdw11:88020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.228 [2024-11-09 17:16:49.969304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.487 #72 NEW cov: 11828 ft: 14772 corp: 41/591b lim: 35 exec/s: 72 rss: 70Mb L: 7/25 MS: 5 EraseBytes-ChangeBinInt-ShuffleBytes-InsertByte-InsertByte- 00:07:30.487 [2024-11-09 17:16:50.009475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6e6f2f6f cdw11:6f6f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.487 [2024-11-09 17:16:50.009501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.487 #73 NEW cov: 11828 ft: 14792 corp: 42/598b lim: 35 exec/s: 73 rss: 70Mb L: 7/25 MS: 1 CrossOver- 00:07:30.487 [2024-11-09 17:16:50.059721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.487 [2024-11-09 17:16:50.059749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.487 [2024-11-09 17:16:50.059802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f126b6f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.487 [2024-11-09 17:16:50.059816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.487 #74 NEW cov: 11828 ft: 14814 corp: 43/616b lim: 35 exec/s: 74 rss: 70Mb L: 18/25 MS: 1 ChangeBinInt- 00:07:30.487 [2024-11-09 17:16:50.100025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:c16f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.487 [2024-11-09 17:16:50.100054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.487 [2024-11-09 17:16:50.100106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6f6f6f6f cdw11:fff00001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.487 [2024-11-09 17:16:50.100120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.487 [2024-11-09 17:16:50.100172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:44cbbfd0 cdw11:c5680002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.487 [2024-11-09 17:16:50.100186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.487 #75 NEW cov: 11828 ft: 14833 corp: 44/642b lim: 35 exec/s: 75 rss: 70Mb L: 26/26 MS: 1 InsertByte- 00:07:30.487 [2024-11-09 17:16:50.150126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6f6f2f6f cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.487 [2024-11-09 
17:16:50.150152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.487 [2024-11-09 17:16:50.150206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00006f6f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.487 [2024-11-09 17:16:50.150220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.487 [2024-11-09 17:16:50.150263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:006f0000 cdw11:6f6f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.487 [2024-11-09 17:16:50.150276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.487 #76 NEW cov: 11828 ft: 14841 corp: 45/667b lim: 35 exec/s: 38 rss: 70Mb L: 25/26 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:30.487 #76 DONE cov: 11828 ft: 14841 corp: 45/667b lim: 35 exec/s: 38 rss: 70Mb 00:07:30.487 ###### Recommended dictionary. ###### 00:07:30.487 "\001\000\002\000" # Uses: 2 00:07:30.487 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:30.487 "\377\203\277\320D\313\305h" # Uses: 0 00:07:30.487 ###### End of recommended dictionary. ###### 00:07:30.487 Done 76 runs in 2 second(s) 00:07:30.746 17:16:50 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_4.conf 00:07:30.746 17:16:50 -- ../common.sh@72 -- # (( i++ )) 00:07:30.746 17:16:50 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:30.746 17:16:50 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:30.746 17:16:50 -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:30.746 17:16:50 -- nvmf/run.sh@24 -- # local timen=1 00:07:30.746 17:16:50 -- nvmf/run.sh@25 -- # local core=0x1 00:07:30.746 17:16:50 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:30.746 17:16:50 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:30.746 17:16:50 -- nvmf/run.sh@29 -- # printf %02d 5 00:07:30.746 17:16:50 -- nvmf/run.sh@29 -- # port=4405 00:07:30.746 17:16:50 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:30.746 17:16:50 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:30.746 17:16:50 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:30.746 17:16:50 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 -r /var/tmp/spdk5.sock 00:07:30.746 [2024-11-09 17:16:50.339309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:30.746 [2024-11-09 17:16:50.339381] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368002 ] 00:07:30.746 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.006 [2024-11-09 17:16:50.519225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.006 [2024-11-09 17:16:50.584618] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:31.006 [2024-11-09 17:16:50.584748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.006 [2024-11-09 17:16:50.643047] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.006 [2024-11-09 17:16:50.659402] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:31.006 INFO: Running with entropic power schedule (0xFF, 100). 00:07:31.006 INFO: Seed: 125092236 00:07:31.006 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:31.006 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:31.006 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:31.006 INFO: A corpus is not provided, starting from an empty corpus 00:07:31.006 #2 INITED exec/s: 0 rss: 61Mb 00:07:31.006 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:31.006 This may also happen if the target rejected all inputs we tried so far 00:07:31.006 [2024-11-09 17:16:50.736173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.006 [2024-11-09 17:16:50.736209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.006 [2024-11-09 17:16:50.736322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.006 [2024-11-09 17:16:50.736339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.006 [2024-11-09 17:16:50.736451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.006 [2024-11-09 17:16:50.736473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.006 [2024-11-09 17:16:50.736598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.006 [2024-11-09 17:16:50.736614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.265 NEW_FUNC[1/671]: 0x442af8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:31.265 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:31.525 #8 NEW cov: 11612 ft: 11613 corp: 2/38b lim: 45 exec/s: 0 rss: 68Mb L: 
37/37 MS: 1 InsertRepeatedBytes- 00:07:31.525 [2024-11-09 17:16:51.067098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.525 [2024-11-09 17:16:51.067147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.525 [2024-11-09 17:16:51.067279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.525 [2024-11-09 17:16:51.067301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.525 [2024-11-09 17:16:51.067428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.525 [2024-11-09 17:16:51.067450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.525 [2024-11-09 17:16:51.067582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.525 [2024-11-09 17:16:51.067603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.525 #9 NEW cov: 11725 ft: 12351 corp: 3/75b lim: 45 exec/s: 0 rss: 68Mb L: 37/37 MS: 1 CopyPart- 00:07:31.525 [2024-11-09 17:16:51.116907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.525 [2024-11-09 17:16:51.116935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.525 [2024-11-09 17:16:51.117049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.525 [2024-11-09 17:16:51.117064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.525 [2024-11-09 17:16:51.117173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.117190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.117304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.117319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.526 #10 NEW cov: 11731 ft: 12680 corp: 4/116b lim: 45 exec/s: 0 rss: 68Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:07:31.526 [2024-11-09 17:16:51.157208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.157242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.157366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.157384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.157497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.157516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.157640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.157657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.526 #11 NEW cov: 11816 ft: 12850 corp: 5/158b lim: 45 exec/s: 0 rss: 68Mb L: 42/42 MS: 1 InsertByte- 00:07:31.526 [2024-11-09 17:16:51.207335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.207363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.207474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.207490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.207602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.207617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.207729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.207745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.526 #12 NEW cov: 11816 ft: 13005 corp: 6/200b lim: 45 exec/s: 0 rss: 68Mb L: 42/42 MS: 1 ChangeByte- 00:07:31.526 [2024-11-09 17:16:51.257357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.257385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.257497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.257514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.257642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.257667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.526 [2024-11-09 17:16:51.257783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.526 [2024-11-09 17:16:51.257799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.526 #13 NEW cov: 11816 ft: 13032 corp: 7/242b lim: 45 exec/s: 0 rss: 68Mb L: 42/42 MS: 1 ChangeBit- 00:07:31.786 [2024-11-09 17:16:51.307626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.786 [2024-11-09 17:16:51.307655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.786 [2024-11-09 17:16:51.307771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.786 [2024-11-09 17:16:51.307788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.786 [2024-11-09 17:16:51.307897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.786 [2024-11-09 17:16:51.307916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.786 [2024-11-09 17:16:51.308031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:8e3b0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.786 [2024-11-09 17:16:51.308049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.786 #14 NEW cov: 11816 ft: 13160 corp: 8/284b lim: 45 exec/s: 0 rss: 69Mb L: 42/42 MS: 1 CopyPart- 00:07:31.787 [2024-11-09 17:16:51.357770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a9e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.357798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.357915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.357944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.358062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.358078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.358188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.358205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.787 #15 NEW cov: 11816 ft: 13186 corp: 9/326b lim: 45 exec/s: 0 rss: 69Mb L: 42/42 MS: 1 ChangeBit- 00:07:31.787 [2024-11-09 17:16:51.397607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.397635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.397758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.397774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.397889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.397904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.787 #16 NEW cov: 11816 ft: 13558 corp: 10/355b lim: 45 exec/s: 0 rss: 69Mb L: 29/42 MS: 1 EraseBytes- 00:07:31.787 [2024-11-09 17:16:51.437783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9797d297 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.437812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.437925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:97979797 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.437944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.438055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:97979797 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.438070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.787 #19 NEW cov: 11816 ft: 13634 corp: 11/387b lim: 45 exec/s: 0 rss: 69Mb L: 32/42 MS: 3 ChangeBinInt-InsertByte-InsertRepeatedBytes- 00:07:31.787 [2024-11-09 17:16:51.478158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.478185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.478303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:31.787 [2024-11-09 17:16:51.478319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.478433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.478449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.478574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.478590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.787 #20 NEW cov: 11816 ft: 13682 corp: 12/429b lim: 45 exec/s: 0 rss: 69Mb L: 42/42 MS: 1 ShuffleBytes- 00:07:31.787 [2024-11-09 17:16:51.518251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.518277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.518392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.518408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.518527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.518543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.787 [2024-11-09 17:16:51.518663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.787 [2024-11-09 17:16:51.518684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.787 #21 NEW cov: 11816 ft: 13717 corp: 13/466b lim: 45 exec/s: 0 rss: 69Mb L: 37/42 MS: 1 CopyPart- 00:07:32.047 [2024-11-09 17:16:51.558451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.558478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.558587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.558605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.558720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:32.047 [2024-11-09 17:16:51.558735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.558857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:8e3b0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.558874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.047 #22 NEW cov: 11816 ft: 13738 corp: 14/508b lim: 45 exec/s: 0 rss: 69Mb L: 42/42 MS: 1 CrossOver- 00:07:32.047 [2024-11-09 17:16:51.598344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.598370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.598484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.598502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.598640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.598656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.598774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.598792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.598912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:8e8effff cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.598927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.047 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:32.047 #23 NEW cov: 11839 ft: 13856 corp: 15/553b lim: 45 exec/s: 0 rss: 69Mb L: 45/45 MS: 1 CrossOver- 00:07:32.047 [2024-11-09 17:16:51.638409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a9e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.638437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.638557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.638577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.638696] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e868e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.638712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.638826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.638843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.047 #24 NEW cov: 11839 ft: 13906 corp: 16/595b lim: 45 exec/s: 0 rss: 69Mb L: 42/45 MS: 1 ChangeBit- 00:07:32.047 [2024-11-09 17:16:51.678860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.678888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.679024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.679040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.679170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.679187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.047 [2024-11-09 17:16:51.679301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.047 [2024-11-09 17:16:51.679315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.048 #25 NEW cov: 11839 ft: 13980 corp: 17/634b lim: 45 exec/s: 0 rss: 69Mb L: 39/45 MS: 1 CopyPart- 00:07:32.048 [2024-11-09 17:16:51.718865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ceff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.718891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.048 [2024-11-09 17:16:51.718999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.719016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.048 [2024-11-09 17:16:51.719137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.719155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.048 [2024-11-09 17:16:51.719266] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.719283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.048 #26 NEW cov: 11839 ft: 14007 corp: 18/671b lim: 45 exec/s: 26 rss: 69Mb L: 37/45 MS: 1 ChangeByte- 00:07:32.048 [2024-11-09 17:16:51.758605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9797d297 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.758637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.048 [2024-11-09 17:16:51.758758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:97979797 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.758776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.048 [2024-11-09 17:16:51.758898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:97ff9797 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.758915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.048 #27 NEW cov: 11839 ft: 14024 corp: 19/703b lim: 45 exec/s: 27 rss: 69Mb L: 32/45 MS: 1 CrossOver- 00:07:32.048 [2024-11-09 17:16:51.799314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.799341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.048 [2024-11-09 17:16:51.799457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.799472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.048 [2024-11-09 17:16:51.799595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.799611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.048 [2024-11-09 17:16:51.799730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:1ccad1f4 cdw11:46ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.799747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.048 [2024-11-09 17:16:51.799861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.048 [2024-11-09 17:16:51.799876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.307 #33 NEW cov: 11839 ft: 14099 corp: 20/748b 
lim: 45 exec/s: 33 rss: 69Mb L: 45/45 MS: 1 CMP- DE: "\377\203\277\321\364\034\312F"- 00:07:32.307 [2024-11-09 17:16:51.839212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8eff0a8e cdw11:83bf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.839238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.307 [2024-11-09 17:16:51.839352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8eca46 cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.839368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.307 [2024-11-09 17:16:51.839479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.839497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.307 [2024-11-09 17:16:51.839620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.839638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.307 #34 NEW cov: 11839 ft: 14108 corp: 21/790b lim: 45 exec/s: 34 rss: 69Mb L: 42/45 MS: 1 PersAutoDict- DE: "\377\203\277\321\364\034\312F"- 00:07:32.307 [2024-11-09 17:16:51.879293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e1c0a8e cdw11:83bf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.879320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.307 [2024-11-09 17:16:51.879440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8eca46 cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.879457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.307 [2024-11-09 17:16:51.879572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.879590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.307 [2024-11-09 17:16:51.879709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.879727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.307 #35 NEW cov: 11839 ft: 14115 corp: 22/832b lim: 45 exec/s: 35 rss: 69Mb L: 42/45 MS: 1 ShuffleBytes- 00:07:32.307 [2024-11-09 17:16:51.919273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 
17:16:51.919301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.307 [2024-11-09 17:16:51.919416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.919433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.307 [2024-11-09 17:16:51.919574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.919591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.307 [2024-11-09 17:16:51.919714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:8e3b0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.307 [2024-11-09 17:16:51.919731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.307 #36 NEW cov: 11839 ft: 14126 corp: 23/874b lim: 45 exec/s: 36 rss: 69Mb L: 42/45 MS: 1 ShuffleBytes- 00:07:32.308 [2024-11-09 17:16:51.959560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:51.959587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.308 [2024-11-09 17:16:51.959706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:51.959722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.308 [2024-11-09 17:16:51.959839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:51.959858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.308 [2024-11-09 17:16:51.959977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:8e3b0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:51.959994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.308 #37 NEW cov: 11839 ft: 14163 corp: 24/917b lim: 45 exec/s: 37 rss: 69Mb L: 43/45 MS: 1 InsertByte- 00:07:32.308 [2024-11-09 17:16:51.999601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:51.999627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.308 [2024-11-09 17:16:51.999736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 
[2024-11-09 17:16:51.999751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.308 [2024-11-09 17:16:51.999861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:51.999877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.308 [2024-11-09 17:16:51.999984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8e8e268e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:52.000000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.308 #38 NEW cov: 11839 ft: 14168 corp: 25/959b lim: 45 exec/s: 38 rss: 69Mb L: 42/45 MS: 1 InsertByte- 00:07:32.308 [2024-11-09 17:16:52.039511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9797d297 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:52.039541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.308 [2024-11-09 17:16:52.039668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:97979797 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:52.039684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.308 [2024-11-09 17:16:52.039802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:97979797 cdw11:97970004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.308 [2024-11-09 17:16:52.039820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.308 #39 NEW cov: 11839 ft: 14179 corp: 26/991b lim: 45 exec/s: 39 rss: 69Mb L: 32/45 MS: 1 ShuffleBytes- 00:07:32.567 [2024-11-09 17:16:52.079495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.567 [2024-11-09 17:16:52.079527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.567 [2024-11-09 17:16:52.079637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.567 [2024-11-09 17:16:52.079649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.567 [2024-11-09 17:16:52.079665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.567 [2024-11-09 17:16:52.079677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.079693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:32.568 [2024-11-09 17:16:52.079702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.568 #40 NEW cov: 11848 ft: 14242 corp: 27/1031b lim: 45 exec/s: 40 rss: 70Mb L: 40/45 MS: 1 InsertByte- 00:07:32.568 [2024-11-09 17:16:52.120034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.120059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.120174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.120191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.120304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.120319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.120444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8eae0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.120459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.568 #41 NEW cov: 11848 ft: 14278 corp: 28/1073b lim: 45 exec/s: 41 rss: 70Mb L: 42/45 MS: 1 ChangeBit- 00:07:32.568 [2024-11-09 17:16:52.160053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.160080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.160204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.160221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.160336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.160352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.160465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.160482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.568 #42 NEW cov: 11848 ft: 14289 corp: 29/1110b lim: 45 exec/s: 42 rss: 70Mb L: 37/45 MS: 1 CrossOver- 00:07:32.568 [2024-11-09 17:16:52.200138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff830a8e cdw11:bfd10007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.200165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.200284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e468e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.200302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.200416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.200434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.200562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.200578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.200693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:8e8effff cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.200709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.568 #43 NEW cov: 11848 ft: 14328 corp: 30/1155b lim: 45 exec/s: 43 rss: 70Mb L: 45/45 MS: 1 PersAutoDict- DE: "\377\203\277\321\364\034\312F"- 00:07:32.568 [2024-11-09 17:16:52.260498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.260529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.260648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.260664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.260779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.260796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.260913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:8e3b0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.260929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.568 #44 NEW cov: 11848 ft: 14403 corp: 31/1198b lim: 45 exec/s: 44 rss: 70Mb L: 43/45 MS: 1 ShuffleBytes- 00:07:32.568 [2024-11-09 17:16:52.320676] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8eff0a8e cdw11:83bf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.568 [2024-11-09 17:16:52.320704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.568 [2024-11-09 17:16:52.320830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8eca46 cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.569 [2024-11-09 17:16:52.320848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.569 [2024-11-09 17:16:52.320973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.569 [2024-11-09 17:16:52.320988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.569 [2024-11-09 17:16:52.321113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.569 [2024-11-09 17:16:52.321129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.829 #45 NEW cov: 11848 ft: 14422 corp: 32/1241b lim: 45 exec/s: 45 rss: 70Mb L: 43/45 MS: 1 InsertByte- 00:07:32.829 [2024-11-09 17:16:52.370818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ceff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.370847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.370969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.370984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.371102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.371118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.371232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.371249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.829 #46 NEW cov: 11848 ft: 14427 corp: 33/1278b lim: 45 exec/s: 46 rss: 70Mb L: 37/45 MS: 1 ChangeByte- 00:07:32.829 [2024-11-09 17:16:52.431398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.431429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 
17:16:52.431550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.431569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.431687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.431703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.431824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:83bfd1ff cdw11:d1f40000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.431839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.431956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.431974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.829 #47 NEW cov: 11848 ft: 14459 corp: 34/1323b lim: 45 exec/s: 47 rss: 70Mb L: 45/45 MS: 1 PersAutoDict- DE: "\377\203\277\321\364\034\312F"- 00:07:32.829 [2024-11-09 17:16:52.480672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:5dff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.480700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.480827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.480846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.480964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.480979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.481103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.481118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.829 #48 NEW cov: 11848 ft: 14464 corp: 35/1361b lim: 45 exec/s: 48 rss: 70Mb L: 38/45 MS: 1 InsertByte- 00:07:32.829 [2024-11-09 17:16:52.520962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:d7d7d7d7 cdw11:d7d70006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.520988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.521111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d7d7d7d7 cdw11:d7d70006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.521128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.521245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:d7d7d7d7 cdw11:d7d70006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.521262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.829 #51 NEW cov: 11848 ft: 14482 corp: 36/1395b lim: 45 exec/s: 51 rss: 70Mb L: 34/45 MS: 3 InsertRepeatedBytes-EraseBytes-InsertRepeatedBytes- 00:07:32.829 [2024-11-09 17:16:52.561309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e8e0a9e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.561334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.561449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.561463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.561585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e868e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.561601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.829 [2024-11-09 17:16:52.561718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.829 [2024-11-09 17:16:52.561734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.829 #52 NEW cov: 11848 ft: 14484 corp: 37/1437b lim: 45 exec/s: 52 rss: 70Mb L: 42/45 MS: 1 ChangeBinInt- 00:07:33.089 [2024-11-09 17:16:52.611534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff830a8e cdw11:bfd10007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.611561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.611693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e468e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.611714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.611825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.611842] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.611961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8e8e8e3b cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.611977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.089 #53 NEW cov: 11848 ft: 14549 corp: 38/1481b lim: 45 exec/s: 53 rss: 70Mb L: 44/45 MS: 1 EraseBytes- 00:07:33.089 [2024-11-09 17:16:52.651951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff830a8e cdw11:bfd10007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.651977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.652086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e468e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.652103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.652214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.652230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.652337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.652352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.652466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:c7de0220 cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.652481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.089 #54 NEW cov: 11848 ft: 14569 corp: 39/1526b lim: 45 exec/s: 54 rss: 70Mb L: 45/45 MS: 1 CMP- DE: "\001\000\000\000\002 \307\336"- 00:07:33.089 [2024-11-09 17:16:52.692104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8e920a8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.692129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.692239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.692256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.692371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.692387] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.692507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3b8e8e8e cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.692529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.089 [2024-11-09 17:16:52.692653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:8e8effff cdw11:8e8e0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.089 [2024-11-09 17:16:52.692669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.089 #55 NEW cov: 11848 ft: 14604 corp: 40/1571b lim: 45 exec/s: 27 rss: 70Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:33.089 #55 DONE cov: 11848 ft: 14604 corp: 40/1571b lim: 45 exec/s: 27 rss: 70Mb 00:07:33.090 ###### Recommended dictionary. ###### 00:07:33.090 "\377\203\277\321\364\034\312F" # Uses: 3 00:07:33.090 "\001\000\000\000\002 \307\336" # Uses: 0 00:07:33.090 ###### End of recommended dictionary. ###### 00:07:33.090 Done 55 runs in 2 second(s) 00:07:33.090 17:16:52 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_5.conf 00:07:33.090 17:16:52 -- ../common.sh@72 -- # (( i++ )) 00:07:33.090 17:16:52 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:33.090 17:16:52 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:33.090 17:16:52 -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:33.090 17:16:52 -- nvmf/run.sh@24 -- # local timen=1 00:07:33.090 17:16:52 -- nvmf/run.sh@25 -- # local core=0x1 00:07:33.090 17:16:52 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:33.090 17:16:52 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:33.090 17:16:52 -- nvmf/run.sh@29 -- # printf %02d 6 00:07:33.090 17:16:52 -- nvmf/run.sh@29 -- # port=4406 00:07:33.090 17:16:52 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:33.090 17:16:52 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:33.090 17:16:52 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:33.090 17:16:52 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 -r /var/tmp/spdk6.sock 00:07:33.349 [2024-11-09 17:16:52.875856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:33.349 [2024-11-09 17:16:52.875926] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368321 ] 00:07:33.349 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.349 [2024-11-09 17:16:53.049290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.349 [2024-11-09 17:16:53.114845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:33.349 [2024-11-09 17:16:53.114975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.608 [2024-11-09 17:16:53.173002] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.608 [2024-11-09 17:16:53.189327] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:33.608 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.608 INFO: Seed: 2653080634 00:07:33.608 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:33.608 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:33.608 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:33.608 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.608 #2 INITED exec/s: 0 rss: 60Mb 00:07:33.608 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:33.608 This may also happen if the target rejected all inputs we tried so far 00:07:33.608 [2024-11-09 17:16:53.257815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.608 [2024-11-09 17:16:53.257850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.868 NEW_FUNC[1/669]: 0x445308 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:33.868 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.868 #8 NEW cov: 11529 ft: 11528 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CrossOver- 00:07:33.868 [2024-11-09 17:16:53.588694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000adf cdw11:00000000 00:07:33.868 [2024-11-09 17:16:53.588742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.868 #9 NEW cov: 11642 ft: 12220 corp: 3/5b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeByte- 00:07:34.127 [2024-11-09 17:16:53.638300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001adf cdw11:00000000 00:07:34.127 [2024-11-09 17:16:53.638330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.127 #10 NEW cov: 11648 ft: 12502 corp: 4/7b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:07:34.127 [2024-11-09 17:16:53.678836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001acf cdw11:00000000 00:07:34.127 [2024-11-09 17:16:53.678865] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.127 #11 NEW cov: 11733 ft: 12832 corp: 5/9b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:07:34.128 [2024-11-09 17:16:53.718589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000600a cdw11:00000000 00:07:34.128 [2024-11-09 17:16:53.718618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.128 #15 NEW cov: 11733 ft: 12893 corp: 6/11b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 4 EraseBytes-ChangeBinInt-ChangeByte-CrossOver- 00:07:34.128 [2024-11-09 17:16:53.759077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001a1a cdw11:00000000 00:07:34.128 [2024-11-09 17:16:53.759103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.128 #20 NEW cov: 11733 ft: 12959 corp: 7/13b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 5 EraseBytes-ShuffleBytes-ChangeBit-ShuffleBytes-CopyPart- 00:07:34.128 [2024-11-09 17:16:53.799218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.128 [2024-11-09 17:16:53.799244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.128 #21 NEW cov: 11733 ft: 13003 corp: 8/15b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:34.128 [2024-11-09 17:16:53.839295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001adf cdw11:00000000 00:07:34.128 [2024-11-09 17:16:53.839322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.128 #22 NEW cov: 11733 ft: 13014 corp: 9/17b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:34.128 [2024-11-09 17:16:53.879269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000601a cdw11:00000000 00:07:34.128 [2024-11-09 17:16:53.879295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.387 #23 NEW cov: 11733 ft: 13088 corp: 10/19b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CrossOver- 00:07:34.387 [2024-11-09 17:16:53.919104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e01a cdw11:00000000 00:07:34.387 [2024-11-09 17:16:53.919130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.387 #24 NEW cov: 11733 ft: 13113 corp: 11/22b lim: 10 exec/s: 0 rss: 68Mb L: 3/3 MS: 1 InsertByte- 00:07:34.387 [2024-11-09 17:16:53.949181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000608d cdw11:00000000 00:07:34.387 [2024-11-09 17:16:53.949208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.387 #25 NEW cov: 11733 ft: 13146 corp: 12/24b lim: 10 exec/s: 0 rss: 68Mb L: 2/3 MS: 1 ChangeByte- 00:07:34.387 [2024-11-09 17:16:53.989261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001a0a cdw11:00000000 
00:07:34.387 [2024-11-09 17:16:53.989288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.387 #26 NEW cov: 11733 ft: 13179 corp: 13/26b lim: 10 exec/s: 0 rss: 68Mb L: 2/3 MS: 1 CrossOver- 00:07:34.387 [2024-11-09 17:16:54.019862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000301a cdw11:00000000 00:07:34.387 [2024-11-09 17:16:54.019889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.387 #27 NEW cov: 11733 ft: 13221 corp: 14/29b lim: 10 exec/s: 0 rss: 69Mb L: 3/3 MS: 1 InsertByte- 00:07:34.387 [2024-11-09 17:16:54.059678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001a32 cdw11:00000000 00:07:34.387 [2024-11-09 17:16:54.059704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.387 #28 NEW cov: 11733 ft: 13242 corp: 15/31b lim: 10 exec/s: 0 rss: 69Mb L: 2/3 MS: 1 ChangeByte- 00:07:34.387 [2024-11-09 17:16:54.100047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000601a cdw11:00000000 00:07:34.387 [2024-11-09 17:16:54.100075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.387 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:34.387 #29 NEW cov: 11756 ft: 13297 corp: 16/34b lim: 10 exec/s: 0 rss: 69Mb L: 3/3 MS: 1 CrossOver- 00:07:34.387 [2024-11-09 17:16:54.150215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000601a cdw11:00000000 00:07:34.388 [2024-11-09 17:16:54.150241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.647 #30 NEW cov: 11756 ft: 13350 corp: 17/37b lim: 10 exec/s: 0 rss: 69Mb L: 3/3 MS: 1 ShuffleBytes- 00:07:34.647 [2024-11-09 17:16:54.200475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000601a cdw11:00000000 00:07:34.647 [2024-11-09 17:16:54.200502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.647 #31 NEW cov: 11756 ft: 13360 corp: 18/39b lim: 10 exec/s: 0 rss: 69Mb L: 2/3 MS: 1 ShuffleBytes- 00:07:34.647 [2024-11-09 17:16:54.240152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000608d cdw11:00000000 00:07:34.647 [2024-11-09 17:16:54.240177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.647 #32 NEW cov: 11756 ft: 13374 corp: 19/41b lim: 10 exec/s: 32 rss: 69Mb L: 2/3 MS: 1 CrossOver- 00:07:34.647 [2024-11-09 17:16:54.280266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e01a cdw11:00000000 00:07:34.647 [2024-11-09 17:16:54.280293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.647 #33 NEW cov: 11756 ft: 13385 corp: 20/43b lim: 10 exec/s: 33 rss: 69Mb L: 2/3 MS: 1 EraseBytes- 00:07:34.647 [2024-11-09 17:16:54.330890] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001a1a cdw11:00000000 00:07:34.647 [2024-11-09 17:16:54.330918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.647 #34 NEW cov: 11756 ft: 13444 corp: 21/45b lim: 10 exec/s: 34 rss: 69Mb L: 2/3 MS: 1 CopyPart- 00:07:34.647 [2024-11-09 17:16:54.380997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001adf cdw11:00000000 00:07:34.647 [2024-11-09 17:16:54.381022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.647 #35 NEW cov: 11756 ft: 13503 corp: 22/48b lim: 10 exec/s: 35 rss: 69Mb L: 3/3 MS: 1 CrossOver- 00:07:34.906 [2024-11-09 17:16:54.431167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008ddf cdw11:00000000 00:07:34.906 [2024-11-09 17:16:54.431194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.906 #36 NEW cov: 11756 ft: 13519 corp: 23/50b lim: 10 exec/s: 36 rss: 69Mb L: 2/3 MS: 1 ChangeByte- 00:07:34.906 [2024-11-09 17:16:54.470817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001aff cdw11:00000000 00:07:34.906 [2024-11-09 17:16:54.470845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.906 #37 NEW cov: 11756 ft: 13533 corp: 24/52b lim: 10 exec/s: 37 rss: 69Mb L: 2/3 MS: 1 ChangeBit- 00:07:34.906 [2024-11-09 17:16:54.511039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000060c1 cdw11:00000000 00:07:34.906 [2024-11-09 17:16:54.511065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.906 #38 NEW cov: 11756 ft: 13555 corp: 25/54b lim: 10 exec/s: 38 rss: 69Mb L: 2/3 MS: 1 ChangeByte- 00:07:34.906 [2024-11-09 17:16:54.561628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e11a cdw11:00000000 00:07:34.906 [2024-11-09 17:16:54.561655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.906 #39 NEW cov: 11756 ft: 13580 corp: 26/57b lim: 10 exec/s: 39 rss: 69Mb L: 3/3 MS: 1 ChangeBit- 00:07:34.906 [2024-11-09 17:16:54.611939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e11a cdw11:00000000 00:07:34.906 [2024-11-09 17:16:54.611967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.906 [2024-11-09 17:16:54.612083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000df1a cdw11:00000000 00:07:34.907 [2024-11-09 17:16:54.612102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.907 #40 NEW cov: 11756 ft: 13841 corp: 27/61b lim: 10 exec/s: 40 rss: 69Mb L: 4/4 MS: 1 CopyPart- 00:07:34.907 [2024-11-09 17:16:54.672000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006091 cdw11:00000000 00:07:34.907 
[2024-11-09 17:16:54.672028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.166 #41 NEW cov: 11756 ft: 13860 corp: 28/63b lim: 10 exec/s: 41 rss: 69Mb L: 2/4 MS: 1 ChangeByte- 00:07:35.166 [2024-11-09 17:16:54.722138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001adf cdw11:00000000 00:07:35.166 [2024-11-09 17:16:54.722167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.166 #42 NEW cov: 11756 ft: 13875 corp: 29/66b lim: 10 exec/s: 42 rss: 69Mb L: 3/4 MS: 1 InsertByte- 00:07:35.166 [2024-11-09 17:16:54.772186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008ddf cdw11:00000000 00:07:35.166 [2024-11-09 17:16:54.772214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.166 #43 NEW cov: 11756 ft: 13947 corp: 30/69b lim: 10 exec/s: 43 rss: 69Mb L: 3/4 MS: 1 InsertByte- 00:07:35.166 [2024-11-09 17:16:54.812115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001adf cdw11:00000000 00:07:35.166 [2024-11-09 17:16:54.812142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.166 [2024-11-09 17:16:54.812266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00006ddf cdw11:00000000 00:07:35.166 [2024-11-09 17:16:54.812283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.166 #44 NEW cov: 11756 ft: 13953 corp: 31/73b lim: 10 exec/s: 44 rss: 69Mb L: 4/4 MS: 1 CopyPart- 00:07:35.166 [2024-11-09 17:16:54.872576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006018 cdw11:00000000 00:07:35.166 [2024-11-09 17:16:54.872602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.166 #50 NEW cov: 11756 ft: 13973 corp: 32/75b lim: 10 exec/s: 50 rss: 69Mb L: 2/4 MS: 1 ChangeBit- 00:07:35.166 [2024-11-09 17:16:54.922746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3f cdw11:00000000 00:07:35.166 [2024-11-09 17:16:54.922774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.425 #51 NEW cov: 11756 ft: 13974 corp: 33/77b lim: 10 exec/s: 51 rss: 69Mb L: 2/4 MS: 1 ChangeByte- 00:07:35.425 [2024-11-09 17:16:54.972817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000301a cdw11:00000000 00:07:35.425 [2024-11-09 17:16:54.972845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.425 #52 NEW cov: 11756 ft: 13999 corp: 34/80b lim: 10 exec/s: 52 rss: 70Mb L: 3/4 MS: 1 ChangeASCIIInt- 00:07:35.425 [2024-11-09 17:16:55.023592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000feff cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.023619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:35.425 [2024-11-09 17:16:55.023748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.023766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.425 [2024-11-09 17:16:55.023885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.023901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.425 [2024-11-09 17:16:55.024016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.024033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.425 #57 NEW cov: 11756 ft: 14279 corp: 35/89b lim: 10 exec/s: 57 rss: 70Mb L: 9/9 MS: 5 EraseBytes-ShuffleBytes-ShuffleBytes-ShuffleBytes-CMP- DE: "\376\377\377\377\000\000\000\000"- 00:07:35.425 [2024-11-09 17:16:55.083346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006060 cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.083373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.425 [2024-11-09 17:16:55.083508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009191 cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.083529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.425 #58 NEW cov: 11756 ft: 14329 corp: 36/93b lim: 10 exec/s: 58 rss: 70Mb L: 4/9 MS: 1 CopyPart- 00:07:35.425 [2024-11-09 17:16:55.122899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000600a cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.122927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.425 #59 NEW cov: 11756 ft: 14365 corp: 37/95b lim: 10 exec/s: 59 rss: 70Mb L: 2/9 MS: 1 CrossOver- 00:07:35.425 [2024-11-09 17:16:55.163655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000018fe cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.163682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.425 [2024-11-09 17:16:55.163798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.163813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.425 [2024-11-09 17:16:55.163930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.163948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.425 [2024-11-09 17:16:55.164070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 
cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.425 [2024-11-09 17:16:55.164086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.425 #63 NEW cov: 11756 ft: 14375 corp: 38/104b lim: 10 exec/s: 63 rss: 70Mb L: 9/9 MS: 4 EraseBytes-ChangeByte-CrossOver-PersAutoDict- DE: "\376\377\377\377\000\000\000\000"- 00:07:35.684 [2024-11-09 17:16:55.203364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006060 cdw11:00000000 00:07:35.684 [2024-11-09 17:16:55.203392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.684 [2024-11-09 17:16:55.203509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009191 cdw11:00000000 00:07:35.684 [2024-11-09 17:16:55.203531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.684 #64 NEW cov: 11756 ft: 14379 corp: 39/108b lim: 10 exec/s: 64 rss: 70Mb L: 4/9 MS: 1 ShuffleBytes- 00:07:35.684 [2024-11-09 17:16:55.243185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001a0a cdw11:00000000 00:07:35.684 [2024-11-09 17:16:55.243213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.684 #65 NEW cov: 11756 ft: 14383 corp: 40/110b lim: 10 exec/s: 32 rss: 70Mb L: 2/9 MS: 1 ChangeBit- 00:07:35.684 #65 DONE cov: 11756 ft: 14383 corp: 40/110b lim: 10 exec/s: 32 rss: 70Mb 00:07:35.684 ###### Recommended dictionary. ###### 00:07:35.684 "\376\377\377\377\000\000\000\000" # Uses: 1 00:07:35.684 ###### End of recommended dictionary. 
###### 00:07:35.684 Done 65 runs in 2 second(s) 00:07:35.684 17:16:55 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_6.conf 00:07:35.684 17:16:55 -- ../common.sh@72 -- # (( i++ )) 00:07:35.684 17:16:55 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.684 17:16:55 -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:35.684 17:16:55 -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:35.684 17:16:55 -- nvmf/run.sh@24 -- # local timen=1 00:07:35.684 17:16:55 -- nvmf/run.sh@25 -- # local core=0x1 00:07:35.684 17:16:55 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:35.684 17:16:55 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:35.684 17:16:55 -- nvmf/run.sh@29 -- # printf %02d 7 00:07:35.684 17:16:55 -- nvmf/run.sh@29 -- # port=4407 00:07:35.684 17:16:55 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:35.684 17:16:55 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:35.685 17:16:55 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:35.685 17:16:55 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 -r /var/tmp/spdk7.sock 00:07:35.685 [2024-11-09 17:16:55.435438] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.685 [2024-11-09 17:16:55.435515] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368858 ] 00:07:35.944 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.944 [2024-11-09 17:16:55.613751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.944 [2024-11-09 17:16:55.676858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.944 [2024-11-09 17:16:55.676980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.203 [2024-11-09 17:16:55.735035] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.203 [2024-11-09 17:16:55.751341] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:36.203 INFO: Running with entropic power schedule (0xFF, 100). 00:07:36.203 INFO: Seed: 922111133 00:07:36.203 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:36.203 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:36.203 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:36.203 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.203 #2 INITED exec/s: 0 rss: 61Mb 00:07:36.203 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:36.203 This may also happen if the target rejected all inputs we tried so far 00:07:36.203 [2024-11-09 17:16:55.806553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.203 [2024-11-09 17:16:55.806582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.463 NEW_FUNC[1/669]: 0x445d08 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:36.463 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:36.463 #3 NEW cov: 11529 ft: 11530 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CrossOver- 00:07:36.463 [2024-11-09 17:16:56.107367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.463 [2024-11-09 17:16:56.107406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.463 #4 NEW cov: 11642 ft: 12053 corp: 3/6b lim: 10 exec/s: 0 rss: 68Mb L: 3/3 MS: 1 CopyPart- 00:07:36.463 [2024-11-09 17:16:56.147511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.463 [2024-11-09 17:16:56.147544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.463 [2024-11-09 17:16:56.147595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.463 [2024-11-09 17:16:56.147608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.463 #5 NEW cov: 11648 ft: 12515 corp: 4/11b lim: 10 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 CrossOver- 00:07:36.463 [2024-11-09 17:16:56.187447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:07:36.463 [2024-11-09 17:16:56.187475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.463 #6 NEW cov: 11733 ft: 12902 corp: 5/13b lim: 10 exec/s: 0 rss: 68Mb L: 2/5 MS: 1 ChangeByte- 00:07:36.463 [2024-11-09 17:16:56.227724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.463 [2024-11-09 17:16:56.227750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.463 [2024-11-09 17:16:56.227803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.463 [2024-11-09 17:16:56.227817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.723 #7 NEW cov: 11733 ft: 13004 corp: 6/17b lim: 10 exec/s: 0 rss: 68Mb L: 4/5 MS: 1 CopyPart- 00:07:36.723 [2024-11-09 17:16:56.267707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.267735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.723 #8 NEW cov: 11733 ft: 13184 corp: 7/20b lim: 10 exec/s: 0 rss: 68Mb L: 3/5 MS: 1 ChangeBit- 00:07:36.723 [2024-11-09 17:16:56.308022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.308047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.723 [2024-11-09 17:16:56.308102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000afc cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.308116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.723 [2024-11-09 17:16:56.308170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fcfc cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.308183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.723 #9 NEW cov: 11733 ft: 13474 corp: 8/27b lim: 10 exec/s: 0 rss: 68Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:07:36.723 [2024-11-09 17:16:56.347884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.347909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.723 #10 NEW cov: 11733 ft: 13528 corp: 9/29b lim: 10 exec/s: 0 rss: 69Mb L: 2/7 MS: 1 ShuffleBytes- 00:07:36.723 [2024-11-09 17:16:56.388012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b0a cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.388037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.723 #11 NEW cov: 11733 ft: 13606 corp: 10/31b lim: 10 exec/s: 0 rss: 69Mb L: 2/7 MS: 1 InsertByte- 00:07:36.723 [2024-11-09 17:16:56.428363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000add cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.428388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.723 [2024-11-09 17:16:56.428443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000dddd cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.428456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.723 [2024-11-09 17:16:56.428510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000dd0a cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.428528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.723 #12 NEW cov: 11733 ft: 13659 corp: 11/37b lim: 10 exec/s: 0 rss: 69Mb L: 6/7 MS: 1 InsertRepeatedBytes- 00:07:36.723 [2024-11-09 17:16:56.468624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.468650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.723 [2024-11-09 17:16:56.468704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a9d cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.468718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.723 [2024-11-09 17:16:56.468771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00009d9d cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.468785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.723 [2024-11-09 17:16:56.468837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00009d0a cdw11:00000000 00:07:36.723 [2024-11-09 17:16:56.468850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.983 #13 NEW cov: 11733 ft: 13925 corp: 12/46b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:36.983 [2024-11-09 17:16:56.508349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b49 cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.508374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.983 #14 NEW cov: 11733 ft: 13954 corp: 13/48b lim: 10 exec/s: 0 rss: 69Mb L: 2/9 MS: 1 ChangeByte- 00:07:36.983 [2024-11-09 17:16:56.548834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.548859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.548912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.548925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.548976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.548989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.549043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000049 cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.549055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.983 #15 NEW cov: 11733 ft: 14004 corp: 14/56b lim: 10 exec/s: 0 rss: 69Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:07:36.983 [2024-11-09 17:16:56.588933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.588958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.589009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 
cdw10:0000ffff cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.589022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.589073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.589089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.589139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff2f cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.589152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.983 #16 NEW cov: 11733 ft: 14020 corp: 15/64b lim: 10 exec/s: 0 rss: 69Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:07:36.983 [2024-11-09 17:16:56.629062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.629086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.629157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.629171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.629221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.629234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.629285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000df2f cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.629298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.983 #17 NEW cov: 11733 ft: 14037 corp: 16/72b lim: 10 exec/s: 0 rss: 69Mb L: 8/9 MS: 1 ChangeBit- 00:07:36.983 [2024-11-09 17:16:56.668819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b29 cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.668844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.983 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:36.983 #18 NEW cov: 11756 ft: 14086 corp: 17/75b lim: 10 exec/s: 0 rss: 69Mb L: 3/9 MS: 1 InsertByte- 00:07:36.983 [2024-11-09 17:16:56.709065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a4a cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.709089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.709143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.709156] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.983 #19 NEW cov: 11756 ft: 14141 corp: 18/80b lim: 10 exec/s: 0 rss: 69Mb L: 5/9 MS: 1 ChangeBit- 00:07:36.983 [2024-11-09 17:16:56.749446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff2f cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.749472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.749529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffdf cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.749543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.749593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.749606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.983 [2024-11-09 17:16:56.749660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:36.983 [2024-11-09 17:16:56.749676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.243 #20 NEW cov: 11756 ft: 14164 corp: 19/88b lim: 10 exec/s: 0 rss: 69Mb L: 8/9 MS: 1 ShuffleBytes- 00:07:37.243 [2024-11-09 17:16:56.789179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.243 [2024-11-09 17:16:56.789204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.243 #21 NEW cov: 11756 ft: 14171 corp: 20/90b lim: 10 exec/s: 21 rss: 69Mb L: 2/9 MS: 1 ShuffleBytes- 00:07:37.243 [2024-11-09 17:16:56.819256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004a2f cdw11:00000000 00:07:37.243 [2024-11-09 17:16:56.819280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.243 #22 NEW cov: 11756 ft: 14217 corp: 21/92b lim: 10 exec/s: 22 rss: 69Mb L: 2/9 MS: 1 ChangeBit- 00:07:37.243 [2024-11-09 17:16:56.859639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.243 [2024-11-09 17:16:56.859664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.243 [2024-11-09 17:16:56.859716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a4a cdw11:00000000 00:07:37.243 [2024-11-09 17:16:56.859730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.243 [2024-11-09 17:16:56.859781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.243 [2024-11-09 17:16:56.859794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.243 
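The "#N NEW cov:" entries above are standard libFuzzer status lines: cov counts covered code edges, ft counts coverage features, corp gives the corpus entry count and total size in bytes, and the trailing "MS: n Op-" field lists the mutation operator(s) (ChangeBit, InsertByte, CrossOver, ...) that produced the coverage-increasing input. As a minimal sketch (a hypothetical helper, not part of the SPDK test harness), lines shaped like the output above can be tallied to see which mutators are paying off in a run:

```python
import re
import sys
from collections import Counter

# Matches libFuzzer status lines as they appear in this log, e.g.
#   #23 NEW cov: 11756 ft: 14247 corp: 22/99b lim: 10 exec/s: 23 rss: 69Mb L: 7/9 MS: 1 CrossOver-
STATUS = re.compile(
    r"#\d+\s+NEW\s+cov:\s*(?P<cov>\d+)\s+ft:\s*\d+\s+"
    r"corp:\s*\d+/\d+b.*?MS:\s*\d+\s+(?P<mutators>\S+)"
)

def tally_mutators(log_text: str) -> Counter:
    """Count how often each mutation operator contributed a new coverage point."""
    counts = Counter()
    for m in STATUS.finditer(log_text):
        # Chained mutators are '-'-separated and '-'-terminated,
        # e.g. "EraseBytes-InsertRepeatedBytes-".
        for op in m.group("mutators").split("-"):
            if op:
                counts[op] += 1
    return counts

if __name__ == "__main__":
    for op, n in tally_mutators(sys.stdin.read()).most_common():
        print(f"{op:24s} {n}")
```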
#23 NEW cov: 11756 ft: 14247 corp: 22/99b lim: 10 exec/s: 23 rss: 69Mb L: 7/9 MS: 1 CrossOver- 00:07:37.243 [2024-11-09 17:16:56.899635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.243 [2024-11-09 17:16:56.899660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.243 [2024-11-09 17:16:56.899715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004a0a cdw11:00000000 00:07:37.243 [2024-11-09 17:16:56.899729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.243 #24 NEW cov: 11756 ft: 14289 corp: 23/104b lim: 10 exec/s: 24 rss: 69Mb L: 5/9 MS: 1 ChangeBit- 00:07:37.243 [2024-11-09 17:16:56.939655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.243 [2024-11-09 17:16:56.939680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.243 #25 NEW cov: 11756 ft: 14331 corp: 24/107b lim: 10 exec/s: 25 rss: 69Mb L: 3/9 MS: 1 EraseBytes- 00:07:37.243 [2024-11-09 17:16:56.979778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d45b cdw11:00000000 00:07:37.243 [2024-11-09 17:16:56.979803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.243 #26 NEW cov: 11756 ft: 14351 corp: 25/110b lim: 10 exec/s: 26 rss: 69Mb L: 3/9 MS: 1 InsertByte- 00:07:37.503 [2024-11-09 17:16:57.020122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.020147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.020199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a4a cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.020215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.020269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.020282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.503 #27 NEW cov: 11756 ft: 14374 corp: 26/117b lim: 10 exec/s: 27 rss: 70Mb L: 7/9 MS: 1 CrossOver- 00:07:37.503 [2024-11-09 17:16:57.059985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.060010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.503 #28 NEW cov: 11756 ft: 14396 corp: 27/119b lim: 10 exec/s: 28 rss: 70Mb L: 2/9 MS: 1 CrossOver- 00:07:37.503 [2024-11-09 17:16:57.090133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2d cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.090158] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.503 #29 NEW cov: 11756 ft: 14436 corp: 28/121b lim: 10 exec/s: 29 rss: 70Mb L: 2/9 MS: 1 ChangeBit- 00:07:37.503 [2024-11-09 17:16:57.130370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a4a cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.130395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.130461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a21 cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.130475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.503 #30 NEW cov: 11756 ft: 14444 corp: 29/126b lim: 10 exec/s: 30 rss: 70Mb L: 5/9 MS: 1 ChangeByte- 00:07:37.503 [2024-11-09 17:16:57.170751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ab8 cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.170775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.170843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b8b8 cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.170857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.170909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b8b8 cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.170923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.170975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000b8b8 cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.170989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.503 #32 NEW cov: 11756 ft: 14465 corp: 30/134b lim: 10 exec/s: 32 rss: 70Mb L: 8/9 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:37.503 [2024-11-09 17:16:57.210865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a4a cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.210890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.210957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.210971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.211027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.211040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.211092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.211105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.503 #33 NEW cov: 11756 ft: 14483 corp: 31/143b lim: 10 exec/s: 33 rss: 70Mb L: 9/9 MS: 1 CrossOver- 00:07:37.503 [2024-11-09 17:16:57.251015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.251039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.251095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.251108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.503 [2024-11-09 17:16:57.251163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.503 [2024-11-09 17:16:57.251176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.504 [2024-11-09 17:16:57.251228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000049 cdw11:00000000 00:07:37.504 [2024-11-09 17:16:57.251241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.763 #34 NEW cov: 11756 ft: 14498 corp: 32/151b lim: 10 exec/s: 34 rss: 70Mb L: 8/9 MS: 1 ShuffleBytes- 00:07:37.763 [2024-11-09 17:16:57.290910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f6ac cdw11:00000000 00:07:37.763 [2024-11-09 17:16:57.290936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.763 [2024-11-09 17:16:57.290988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.763 [2024-11-09 17:16:57.291002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.763 #35 NEW cov: 11756 ft: 14499 corp: 33/156b lim: 10 exec/s: 35 rss: 70Mb L: 5/9 MS: 1 ChangeBinInt- 00:07:37.763 [2024-11-09 17:16:57.330999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f6ac cdw11:00000000 00:07:37.763 [2024-11-09 17:16:57.331024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.763 [2024-11-09 17:16:57.331078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000460a cdw11:00000000 00:07:37.763 [2024-11-09 17:16:57.331092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.763 #36 NEW cov: 11756 ft: 14553 corp: 34/161b lim: 10 exec/s: 36 rss: 70Mb L: 5/9 MS: 1 ChangeByte- 00:07:37.763 [2024-11-09 17:16:57.371352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.763 [2024-11-09 
17:16:57.371376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.763 [2024-11-09 17:16:57.371430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000afc cdw11:00000000 00:07:37.763 [2024-11-09 17:16:57.371443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.763 [2024-11-09 17:16:57.371501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000029fc cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.371514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.764 [2024-11-09 17:16:57.371550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fc0a cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.371563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.764 #37 NEW cov: 11756 ft: 14578 corp: 35/169b lim: 10 exec/s: 37 rss: 70Mb L: 8/9 MS: 1 InsertByte- 00:07:37.764 [2024-11-09 17:16:57.411329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.411354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.764 [2024-11-09 17:16:57.411407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.411420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.764 [2024-11-09 17:16:57.411472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.411485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.764 #38 NEW cov: 11756 ft: 14580 corp: 36/176b lim: 10 exec/s: 38 rss: 70Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:07:37.764 [2024-11-09 17:16:57.451590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.451615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.764 [2024-11-09 17:16:57.451669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000afc cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.451683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.764 [2024-11-09 17:16:57.451738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000029fc cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.451751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.764 [2024-11-09 17:16:57.451804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fc21 cdw11:00000000 00:07:37.764 
[2024-11-09 17:16:57.451817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.764 #39 NEW cov: 11756 ft: 14582 corp: 37/185b lim: 10 exec/s: 39 rss: 70Mb L: 9/9 MS: 1 InsertByte- 00:07:37.764 [2024-11-09 17:16:57.491316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b29 cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.491341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.764 #40 NEW cov: 11756 ft: 14595 corp: 38/188b lim: 10 exec/s: 40 rss: 70Mb L: 3/9 MS: 1 ChangeBinInt- 00:07:37.764 [2024-11-09 17:16:57.531755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.531781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.764 [2024-11-09 17:16:57.531837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.531850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.764 [2024-11-09 17:16:57.531907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.764 [2024-11-09 17:16:57.531920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.023 #41 NEW cov: 11756 ft: 14600 corp: 39/195b lim: 10 exec/s: 41 rss: 70Mb L: 7/9 MS: 1 ShuffleBytes- 00:07:38.024 [2024-11-09 17:16:57.571916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.571941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.571997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.572010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.572064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.572077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.572129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002b49 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.572142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.024 #42 NEW cov: 11756 ft: 14612 corp: 40/203b lim: 10 exec/s: 42 rss: 70Mb L: 8/9 MS: 1 ChangeByte- 00:07:38.024 [2024-11-09 17:16:57.611801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a79 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.611826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.611894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.611907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.024 #43 NEW cov: 11756 ft: 14678 corp: 41/207b lim: 10 exec/s: 43 rss: 70Mb L: 4/9 MS: 1 InsertByte- 00:07:38.024 [2024-11-09 17:16:57.652038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.652063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.652113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.652126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.652177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002b49 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.652190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.024 #44 NEW cov: 11756 ft: 14680 corp: 42/213b lim: 10 exec/s: 44 rss: 70Mb L: 6/9 MS: 1 EraseBytes- 00:07:38.024 [2024-11-09 17:16:57.691900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.691925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.024 #45 NEW cov: 11756 ft: 14717 corp: 43/216b lim: 10 exec/s: 45 rss: 70Mb L: 3/9 MS: 1 ShuffleBytes- 00:07:38.024 [2024-11-09 17:16:57.732149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.732177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.732245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.732258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.024 #46 NEW cov: 11756 ft: 14740 corp: 44/220b lim: 10 exec/s: 46 rss: 70Mb L: 4/9 MS: 1 ChangeByte- 00:07:38.024 [2024-11-09 17:16:57.772476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.772501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.772550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.772564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.772615] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.772644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.024 [2024-11-09 17:16:57.772697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a49 cdw11:00000000 00:07:38.024 [2024-11-09 17:16:57.772710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.283 #47 NEW cov: 11756 ft: 14746 corp: 45/228b lim: 10 exec/s: 23 rss: 70Mb L: 8/9 MS: 1 CrossOver- 00:07:38.283 #47 DONE cov: 11756 ft: 14746 corp: 45/228b lim: 10 exec/s: 23 rss: 70Mb 00:07:38.283 Done 47 runs in 2 second(s) 00:07:38.283 17:16:57 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_7.conf 00:07:38.283 17:16:57 -- ../common.sh@72 -- # (( i++ )) 00:07:38.283 17:16:57 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.283 17:16:57 -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:38.283 17:16:57 -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:38.283 17:16:57 -- nvmf/run.sh@24 -- # local timen=1 00:07:38.283 17:16:57 -- nvmf/run.sh@25 -- # local core=0x1 00:07:38.283 17:16:57 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:38.283 17:16:57 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:38.283 17:16:57 -- nvmf/run.sh@29 -- # printf %02d 8 00:07:38.283 17:16:57 -- nvmf/run.sh@29 -- # port=4408 00:07:38.283 17:16:57 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:38.283 17:16:57 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:38.283 17:16:57 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:38.283 17:16:57 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 -r /var/tmp/spdk8.sock 00:07:38.283 [2024-11-09 17:16:57.959127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
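The nvmf/run.sh lines above show how each fuzzer instance gets its own NVMe-oF/TCP listener: the port appears to be 44 followed by the zero-padded fuzzer number (printf %02d 8 yields 4408), sed rewrites the stock fuzz_json.conf to swap trsvcid 4420 for that port, and the resulting transport ID string is handed to llvm_nvme_fuzz via -F. A sketch of that derivation follows; the helper name and defaults are illustrative, not SPDK API:

```python
def nvmf_fuzz_trid(fuzzer_type: int,
                   subnqn: str = "nqn.2016-06.io.spdk:cnode1",
                   traddr: str = "127.0.0.1") -> str:
    """Rebuild the -F transport ID string the harness passes to llvm_nvme_fuzz.

    Mirrors the run.sh steps above: each fuzzer listens on its own TCP
    port, "44" plus the zero-padded fuzzer number (fuzzer 8 -> trsvcid 4408).
    """
    port = f"44{fuzzer_type:02d}"
    return (f"trtype:tcp adrfam:IPv4 subnqn:{subnqn} "
            f"traddr:{traddr} trsvcid:{port}")

# Matches the trid built for fuzzer 8 in the log above.
assert nvmf_fuzz_trid(8) == ("trtype:tcp adrfam:IPv4 "
                             "subnqn:nqn.2016-06.io.spdk:cnode1 "
                             "traddr:127.0.0.1 trsvcid:4408")
```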
00:07:38.283 [2024-11-09 17:16:57.959199] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369338 ] 00:07:38.283 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.541 [2024-11-09 17:16:58.135490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.541 [2024-11-09 17:16:58.198976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:38.541 [2024-11-09 17:16:58.199101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.541 [2024-11-09 17:16:58.256880] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.541 [2024-11-09 17:16:58.273253] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:38.541 INFO: Running with entropic power schedule (0xFF, 100). 00:07:38.541 INFO: Seed: 3442102875 00:07:38.541 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:38.541 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:38.541 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:38.541 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.801 [2024-11-09 17:16:58.321883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.321913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 #2 INITED cov: 11557 ft: 11558 corp: 1/1b exec/s: 0 rss: 66Mb 00:07:38.801 [2024-11-09 17:16:58.351866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.351892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 #3 NEW cov: 11670 ft: 11875 corp: 2/2b lim: 5 exec/s: 0 rss: 66Mb L: 1/1 MS: 1 CopyPart- 00:07:38.801 [2024-11-09 17:16:58.392153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.392178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 [2024-11-09 17:16:58.392233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.392246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.801 #4 NEW cov: 11676 ft: 12799 corp: 3/4b lim: 5 exec/s: 0 rss: 67Mb L: 2/2 MS: 1 InsertByte- 00:07:38.801 [2024-11-09 17:16:58.432109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.432134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 #5 NEW cov: 11761 ft: 13055 corp: 4/5b lim: 5 exec/s: 0 rss: 68Mb L: 1/2 MS: 1 ChangeByte- 00:07:38.801 [2024-11-09 17:16:58.472503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.472532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 [2024-11-09 17:16:58.472586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.472600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.801 [2024-11-09 17:16:58.472654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.472667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.801 #6 NEW cov: 11761 ft: 13474 corp: 5/8b lim: 5 exec/s: 0 rss: 68Mb L: 3/3 MS: 1 CopyPart- 00:07:38.801 [2024-11-09 17:16:58.512335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.512363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 #7 NEW cov: 11761 ft: 13562 corp: 6/9b lim: 5 exec/s: 0 rss: 68Mb L: 1/3 MS: 1 CrossOver- 00:07:38.801 [2024-11-09 17:16:58.542869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.542895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.801 [2024-11-09 17:16:58.542949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.542962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.801 [2024-11-09 17:16:58.543014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.543027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.801 [2024-11-09 17:16:58.543079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.801 [2024-11-09 17:16:58.543092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.060 #8 NEW cov: 11761 ft: 13885 corp: 7/13b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 InsertByte- 00:07:39.060 [2024-11-09 17:16:58.592729] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.592754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.592806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.592819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.060 #9 NEW cov: 11761 ft: 13948 corp: 8/15b lim: 5 exec/s: 0 rss: 68Mb L: 2/4 MS: 1 CopyPart- 00:07:39.060 [2024-11-09 17:16:58.632737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.632762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.060 #10 NEW cov: 11761 ft: 14024 corp: 9/16b lim: 5 exec/s: 0 rss: 68Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:39.060 [2024-11-09 17:16:58.663032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.663057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.663109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.663123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.663176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.663206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.060 #11 NEW cov: 11761 ft: 14053 corp: 10/19b lim: 5 exec/s: 0 rss: 68Mb L: 3/4 MS: 1 ChangeByte- 00:07:39.060 [2024-11-09 17:16:58.703317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.703343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.703396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.703410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.703462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.703474] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.703532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.703546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.060 #12 NEW cov: 11761 ft: 14072 corp: 11/23b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 CrossOver- 00:07:39.060 [2024-11-09 17:16:58.743120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.743146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.743201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.743214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.060 #13 NEW cov: 11761 ft: 14141 corp: 12/25b lim: 5 exec/s: 0 rss: 68Mb L: 2/4 MS: 1 CrossOver- 00:07:39.060 [2024-11-09 17:16:58.783580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.783606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.783660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.783673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.783726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.783739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.060 [2024-11-09 17:16:58.783791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.783804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.060 #14 NEW cov: 11761 ft: 14156 corp: 13/29b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 InsertByte- 00:07:39.060 [2024-11-09 17:16:58.823237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.060 [2024-11-09 17:16:58.823265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.320 #15 NEW cov: 11761 ft: 14207 corp: 14/30b lim: 5 exec/s: 0 rss: 68Mb L: 1/4 
MS: 1 ShuffleBytes- 00:07:39.320 [2024-11-09 17:16:58.863665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.863690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:58.863743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.863756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:58.863808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.863820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.320 #16 NEW cov: 11761 ft: 14231 corp: 15/33b lim: 5 exec/s: 0 rss: 68Mb L: 3/4 MS: 1 CrossOver- 00:07:39.320 [2024-11-09 17:16:58.903477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.903501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.320 #17 NEW cov: 11761 ft: 14256 corp: 16/34b lim: 5 exec/s: 0 rss: 68Mb L: 1/4 MS: 1 CopyPart- 00:07:39.320 [2024-11-09 17:16:58.934045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.934070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:58.934122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.934136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:58.934188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.934216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:58.934267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.934280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.320 #18 NEW cov: 11761 ft: 14267 corp: 17/38b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 ChangeBit- 00:07:39.320 [2024-11-09 17:16:58.974113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.974138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:58.974193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.974206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:58.974262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.974275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:58.974327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:58.974339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.320 #19 NEW cov: 11761 ft: 14286 corp: 18/42b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:39.320 [2024-11-09 17:16:59.013954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:59.013979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:59.014034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:59.014047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.320 #20 NEW cov: 11761 ft: 14297 corp: 19/44b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ShuffleBytes- 00:07:39.320 [2024-11-09 17:16:59.054045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:59.054071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.320 [2024-11-09 17:16:59.054127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.320 [2024-11-09 17:16:59.054141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.320 #21 NEW cov: 11761 ft: 14321 corp: 20/46b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ShuffleBytes- 00:07:39.580 [2024-11-09 17:16:59.094025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.094051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.580 #22 
NEW cov: 11761 ft: 14392 corp: 21/47b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ChangeBinInt- 00:07:39.580 [2024-11-09 17:16:59.134767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.134793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.580 [2024-11-09 17:16:59.134849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.134863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.580 [2024-11-09 17:16:59.134917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.134931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.580 [2024-11-09 17:16:59.134984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.135000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.580 [2024-11-09 17:16:59.135054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.135067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.580 #23 NEW cov: 11761 ft: 14482 corp: 22/52b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 CrossOver- 00:07:39.580 [2024-11-09 17:16:59.174414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.174439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.580 [2024-11-09 17:16:59.174495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.174509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.580 #24 NEW cov: 11761 ft: 14499 corp: 23/54b lim: 5 exec/s: 0 rss: 69Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:39.580 [2024-11-09 17:16:59.214798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.214824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.580 [2024-11-09 17:16:59.214877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:39.580 [2024-11-09 17:16:59.214890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.580 [2024-11-09 17:16:59.214941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.214954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.580 [2024-11-09 17:16:59.215008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.580 [2024-11-09 17:16:59.215021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.839 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:39.839 #25 NEW cov: 11784 ft: 14558 corp: 24/58b lim: 5 exec/s: 25 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:07:39.839 [2024-11-09 17:16:59.515408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.839 [2024-11-09 17:16:59.515445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.839 [2024-11-09 17:16:59.515506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.839 [2024-11-09 17:16:59.515528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.839 #26 NEW cov: 11784 ft: 14569 corp: 25/60b lim: 5 exec/s: 26 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:39.839 [2024-11-09 17:16:59.565633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.839 [2024-11-09 17:16:59.565663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.839 [2024-11-09 17:16:59.565721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.839 [2024-11-09 17:16:59.565735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.839 [2024-11-09 17:16:59.565790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.839 [2024-11-09 17:16:59.565803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.839 #27 NEW cov: 11784 ft: 14635 corp: 26/63b lim: 5 exec/s: 27 rss: 70Mb L: 3/5 MS: 1 InsertByte- 00:07:39.839 [2024-11-09 17:16:59.605936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.839 [2024-11-09 17:16:59.605964] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.840 [2024-11-09 17:16:59.606024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.840 [2024-11-09 17:16:59.606038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.840 [2024-11-09 17:16:59.606097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.840 [2024-11-09 17:16:59.606110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.840 [2024-11-09 17:16:59.606165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.840 [2024-11-09 17:16:59.606178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.098 #28 NEW cov: 11784 ft: 14641 corp: 27/67b lim: 5 exec/s: 28 rss: 70Mb L: 4/5 MS: 1 InsertByte- 00:07:40.098 [2024-11-09 17:16:59.645672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.645698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.645754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.645767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.098 #29 NEW cov: 11784 ft: 14662 corp: 28/69b lim: 5 exec/s: 29 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:40.098 [2024-11-09 17:16:59.686305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.686330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.686385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.686398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.686454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.686471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.686527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:40.098 [2024-11-09 17:16:59.686540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.686593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.686606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.098 #30 NEW cov: 11784 ft: 14689 corp: 29/74b lim: 5 exec/s: 30 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:07:40.098 [2024-11-09 17:16:59.725785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.725810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.098 #31 NEW cov: 11784 ft: 14694 corp: 30/75b lim: 5 exec/s: 31 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:40.098 [2024-11-09 17:16:59.766042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.766066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.766122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.766136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.098 #32 NEW cov: 11784 ft: 14732 corp: 31/77b lim: 5 exec/s: 32 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:40.098 [2024-11-09 17:16:59.806654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.806679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.806733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.806746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.806799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.806813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.806865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.806879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 
17:16:59.806932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.806945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.098 #33 NEW cov: 11784 ft: 14745 corp: 32/82b lim: 5 exec/s: 33 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:07:40.098 [2024-11-09 17:16:59.846452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.846477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.846537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.846551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.098 [2024-11-09 17:16:59.846608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.098 [2024-11-09 17:16:59.846621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.357 #34 NEW cov: 11784 ft: 14787 corp: 33/85b lim: 5 exec/s: 34 rss: 70Mb L: 3/5 MS: 1 EraseBytes- 00:07:40.357 [2024-11-09 17:16:59.886234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:16:59.886258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.357 #35 NEW cov: 11784 ft: 14802 corp: 34/86b lim: 5 exec/s: 35 rss: 70Mb L: 1/5 MS: 1 EraseBytes- 00:07:40.357 [2024-11-09 17:16:59.926380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:16:59.926405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.357 #36 NEW cov: 11784 ft: 14816 corp: 35/87b lim: 5 exec/s: 36 rss: 70Mb L: 1/5 MS: 1 ChangeByte- 00:07:40.357 [2024-11-09 17:16:59.966520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:16:59.966548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.357 #37 NEW cov: 11784 ft: 14880 corp: 36/88b lim: 5 exec/s: 37 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:07:40.357 [2024-11-09 17:17:00.006613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:17:00.006639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.357 #38 
NEW cov: 11784 ft: 14884 corp: 37/89b lim: 5 exec/s: 38 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:40.357 [2024-11-09 17:17:00.046919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:17:00.046946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.357 [2024-11-09 17:17:00.047004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:17:00.047018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.357 #39 NEW cov: 11784 ft: 14914 corp: 38/91b lim: 5 exec/s: 39 rss: 70Mb L: 2/5 MS: 1 EraseBytes- 00:07:40.357 [2024-11-09 17:17:00.097507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:17:00.097548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.357 [2024-11-09 17:17:00.097613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:17:00.097627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.357 [2024-11-09 17:17:00.097683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:17:00.097696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.357 [2024-11-09 17:17:00.097753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:17:00.097766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.357 [2024-11-09 17:17:00.097823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.357 [2024-11-09 17:17:00.097837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.617 #40 NEW cov: 11784 ft: 14925 corp: 39/96b lim: 5 exec/s: 40 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:40.617 [2024-11-09 17:17:00.147644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.147680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.147754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:40.617 [2024-11-09 17:17:00.147767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.147815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.147828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.147884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.147897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.147953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.147967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.617 #41 NEW cov: 11784 ft: 14935 corp: 40/101b lim: 5 exec/s: 41 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:07:40.617 [2024-11-09 17:17:00.197656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.197681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.197752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.197767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.197823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.197836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.197891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.197904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.617 #42 NEW cov: 11784 ft: 14947 corp: 41/105b lim: 5 exec/s: 42 rss: 70Mb L: 4/5 MS: 1 ShuffleBytes- 00:07:40.617 [2024-11-09 17:17:00.237924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.237949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.238021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.238035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.238091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.238104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.238161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.238174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.238229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.238242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.617 #43 NEW cov: 11784 ft: 15020 corp: 42/110b lim: 5 exec/s: 43 rss: 70Mb L: 5/5 MS: 1 InsertByte- 00:07:40.617 [2024-11-09 17:17:00.277864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.277889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.277945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.277959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.278017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.278030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.278085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.278098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.617 #44 NEW cov: 11784 ft: 15033 corp: 43/114b lim: 5 exec/s: 44 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:07:40.617 [2024-11-09 17:17:00.317623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.317648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.617 [2024-11-09 17:17:00.317706] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.617 [2024-11-09 17:17:00.317719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.617 #45 NEW cov: 11784 ft: 15038 corp: 44/116b lim: 5 exec/s: 22 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:07:40.617 #45 DONE cov: 11784 ft: 15038 corp: 44/116b lim: 5 exec/s: 22 rss: 70Mb 00:07:40.617 Done 45 runs in 2 second(s) 00:07:40.877 17:17:00 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_8.conf 00:07:40.877 17:17:00 -- ../common.sh@72 -- # (( i++ )) 00:07:40.877 17:17:00 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.877 17:17:00 -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:40.877 17:17:00 -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:40.877 17:17:00 -- nvmf/run.sh@24 -- # local timen=1 00:07:40.877 17:17:00 -- nvmf/run.sh@25 -- # local core=0x1 00:07:40.877 17:17:00 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:40.877 17:17:00 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:40.877 17:17:00 -- nvmf/run.sh@29 -- # printf %02d 9 00:07:40.877 17:17:00 -- nvmf/run.sh@29 -- # port=4409 00:07:40.877 17:17:00 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:40.877 17:17:00 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:40.877 17:17:00 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:40.877 17:17:00 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 -r /var/tmp/spdk9.sock 00:07:40.877 [2024-11-09 17:17:00.508920] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.877 [2024-11-09 17:17:00.508990] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369696 ] 00:07:40.877 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.136 [2024-11-09 17:17:00.691510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.136 [2024-11-09 17:17:00.755588] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:41.136 [2024-11-09 17:17:00.755731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.136 [2024-11-09 17:17:00.813768] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.137 [2024-11-09 17:17:00.830146] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:41.137 INFO: Running with entropic power schedule (0xFF, 100). 
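For readability, the set -x trace above for fuzzer 9 boils down to the launch sequence below. This is a consolidated sketch of what nvmf/run.sh executed, not a separate script: $rootdir stands in for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk, and redirecting the sed output into the per-run config is implied by the nvmf_cfg assignment rather than shown verbatim in the trace.
# Consolidated sketch of the traced commands: start_llvm_fuzz 9 1 0x1
fuzzer_type=9; timen=1; core=0x1
corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
port="44$(printf %02d $fuzzer_type)"          # printf %02d 9 -> 09, so port 4409
mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
# Rewrite the listener port in the template config (output path assumed
# from the nvmf_cfg assignment in the trace):
sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
"$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
    -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type" -r /var/tmp/spdk$fuzzer_type.sock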
00:07:41.137 INFO: Seed: 1704140265 00:07:41.137 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:41.137 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:41.137 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:41.137 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.137 [2024-11-09 17:17:00.899833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.137 [2024-11-09 17:17:00.899874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.396 #2 INITED cov: 11553 ft: 11554 corp: 1/1b exec/s: 0 rss: 66Mb 00:07:41.396 [2024-11-09 17:17:00.939482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.396 [2024-11-09 17:17:00.939512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.655 NEW_FUNC[1/1]: 0xf6cdf8 in spdk_sock_prep_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk_internal/sock.h:284 00:07:41.655 #3 NEW cov: 11670 ft: 12126 corp: 2/2b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 CrossOver- 00:07:41.655 [2024-11-09 17:17:01.271780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.271815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.655 [2024-11-09 17:17:01.271943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.271959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.655 [2024-11-09 17:17:01.272080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.272096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.655 [2024-11-09 17:17:01.272224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.272239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.655 [2024-11-09 17:17:01.272353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.272368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.655 #4 NEW cov: 11676 ft: 13264 corp: 3/7b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:41.655 [2024-11-09 17:17:01.330801] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.330832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.655 #5 NEW cov: 11761 ft: 13544 corp: 4/8b lim: 5 exec/s: 0 rss: 68Mb L: 1/5 MS: 1 ChangeBit- 00:07:41.655 [2024-11-09 17:17:01.381070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.381101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.655 #6 NEW cov: 11761 ft: 13638 corp: 5/9b lim: 5 exec/s: 0 rss: 68Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:41.655 [2024-11-09 17:17:01.421914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.421943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.655 [2024-11-09 17:17:01.422072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.422094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.655 [2024-11-09 17:17:01.422217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.422233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.655 [2024-11-09 17:17:01.422357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.422374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.655 [2024-11-09 17:17:01.422439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.655 [2024-11-09 17:17:01.422456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.915 #7 NEW cov: 11761 ft: 13738 corp: 6/14b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ChangeBit- 00:07:41.915 [2024-11-09 17:17:01.481308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.481337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.915 #8 NEW cov: 11761 ft: 13790 corp: 7/15b lim: 5 exec/s: 0 rss: 68Mb L: 1/5 MS: 1 CopyPart- 00:07:41.915 [2024-11-09 17:17:01.542796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.542826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.542956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.542973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.543094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.543109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.543200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.543216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.543340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.543356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.915 #9 NEW cov: 11761 ft: 13916 corp: 8/20b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 CrossOver- 00:07:41.915 [2024-11-09 17:17:01.602875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.602903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.603025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.603045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.603166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.603183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.603309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.603326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.603438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:41.915 [2024-11-09 17:17:01.603455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.915 #10 NEW cov: 11761 ft: 13969 corp: 9/25b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:41.915 [2024-11-09 17:17:01.653029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.653056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.653184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.653202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.653328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.653344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.653475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.653490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.915 [2024-11-09 17:17:01.653615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.915 [2024-11-09 17:17:01.653632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.915 #11 NEW cov: 11761 ft: 13978 corp: 10/30b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:42.175 [2024-11-09 17:17:01.692173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.692200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.175 [2024-11-09 17:17:01.692329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.692345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.175 [2024-11-09 17:17:01.692467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.692485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.175 #12 NEW cov: 11761 ft: 14214 corp: 11/33b lim: 5 exec/s: 0 rss: 68Mb L: 3/5 MS: 1 EraseBytes- 00:07:42.175 [2024-11-09 
17:17:01.732164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.732191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.175 #13 NEW cov: 11761 ft: 14285 corp: 12/34b lim: 5 exec/s: 0 rss: 68Mb L: 1/5 MS: 1 ChangeBit- 00:07:42.175 [2024-11-09 17:17:01.773213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.773241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.175 [2024-11-09 17:17:01.773359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.773377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.175 [2024-11-09 17:17:01.773499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.773517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.175 [2024-11-09 17:17:01.773616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.773632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.175 [2024-11-09 17:17:01.773760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.773775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.175 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:42.175 #14 NEW cov: 11784 ft: 14316 corp: 13/39b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:42.175 [2024-11-09 17:17:01.822655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.822682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.175 [2024-11-09 17:17:01.822806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.822833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.175 #15 NEW cov: 11784 ft: 14489 corp: 14/41b lim: 5 exec/s: 0 rss: 69Mb L: 2/5 MS: 1 CrossOver- 00:07:42.175 [2024-11-09 17:17:01.862345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.862372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.175 [2024-11-09 17:17:01.862497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.862514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.175 #16 NEW cov: 11784 ft: 14514 corp: 15/43b lim: 5 exec/s: 16 rss: 69Mb L: 2/5 MS: 1 InsertByte- 00:07:42.175 [2024-11-09 17:17:01.913011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.913038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.175 [2024-11-09 17:17:01.913167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.175 [2024-11-09 17:17:01.913183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.175 #17 NEW cov: 11784 ft: 14540 corp: 16/45b lim: 5 exec/s: 17 rss: 69Mb L: 2/5 MS: 1 CopyPart- 00:07:42.435 [2024-11-09 17:17:01.964067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:01.964095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:01.964219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:01.964236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:01.964358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:01.964375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:01.964506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:01.964525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:01.964662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:01.964678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.435 #18 NEW cov: 11784 ft: 
14566 corp: 17/50b lim: 5 exec/s: 18 rss: 69Mb L: 5/5 MS: 1 ChangeByte- 00:07:42.435 [2024-11-09 17:17:02.024247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.024274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:02.024401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.024416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:02.024541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.024569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:02.024696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.024715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:02.024844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.024859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.435 #19 NEW cov: 11784 ft: 14585 corp: 18/55b lim: 5 exec/s: 19 rss: 69Mb L: 5/5 MS: 1 ChangeBit- 00:07:42.435 [2024-11-09 17:17:02.083246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.083274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.435 #20 NEW cov: 11784 ft: 14587 corp: 19/56b lim: 5 exec/s: 20 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:42.435 [2024-11-09 17:17:02.133458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.133483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.435 #21 NEW cov: 11784 ft: 14801 corp: 20/57b lim: 5 exec/s: 21 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:42.435 [2024-11-09 17:17:02.184775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.184804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:02.184929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.184947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:02.185073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.185089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:02.185216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.185233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.435 [2024-11-09 17:17:02.185361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.435 [2024-11-09 17:17:02.185378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.695 #22 NEW cov: 11784 ft: 14840 corp: 21/62b lim: 5 exec/s: 22 rss: 69Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:42.695 [2024-11-09 17:17:02.234620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.234649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.234775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.234794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.234922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.234937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.235062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.235080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.695 #23 NEW cov: 11784 ft: 14858 corp: 22/66b lim: 5 exec/s: 23 rss: 69Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:42.695 [2024-11-09 17:17:02.295084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.295112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.295239] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.295257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.295374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.295391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.295511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.295532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.295657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.295674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.695 #24 NEW cov: 11784 ft: 14882 corp: 23/71b lim: 5 exec/s: 24 rss: 69Mb L: 5/5 MS: 1 ChangeBit- 00:07:42.695 [2024-11-09 17:17:02.355287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.355314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.355440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.355457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.355577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.355594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.355724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.355743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.355877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.355900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.695 #25 NEW cov: 11784 ft: 14886 corp: 24/76b lim: 5 exec/s: 25 rss: 69Mb L: 
5/5 MS: 1 ChangeByte- 00:07:42.695 [2024-11-09 17:17:02.404650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.404678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.695 [2024-11-09 17:17:02.404806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.695 [2024-11-09 17:17:02.404824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.695 #26 NEW cov: 11784 ft: 14940 corp: 25/78b lim: 5 exec/s: 26 rss: 69Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:42.954 [2024-11-09 17:17:02.465166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.954 [2024-11-09 17:17:02.465195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.954 [2024-11-09 17:17:02.465329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.954 [2024-11-09 17:17:02.465346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.954 [2024-11-09 17:17:02.465467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.465485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.955 #27 NEW cov: 11784 ft: 14952 corp: 26/81b lim: 5 exec/s: 27 rss: 69Mb L: 3/5 MS: 1 EraseBytes- 00:07:42.955 [2024-11-09 17:17:02.504782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.504810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.955 [2024-11-09 17:17:02.504940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.504957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.955 [2024-11-09 17:17:02.505083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.505099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.955 #28 NEW cov: 11784 ft: 14978 corp: 27/84b lim: 5 exec/s: 28 rss: 69Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:42.955 [2024-11-09 17:17:02.565170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.565198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.955 [2024-11-09 17:17:02.565327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.565349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.955 #29 NEW cov: 11784 ft: 14995 corp: 28/86b lim: 5 exec/s: 29 rss: 69Mb L: 2/5 MS: 1 InsertByte- 00:07:42.955 [2024-11-09 17:17:02.615237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.615266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.955 [2024-11-09 17:17:02.615400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.615419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.955 #30 NEW cov: 11784 ft: 15011 corp: 29/88b lim: 5 exec/s: 30 rss: 69Mb L: 2/5 MS: 1 ChangeBit- 00:07:42.955 [2024-11-09 17:17:02.676324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.676352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.955 [2024-11-09 17:17:02.676481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.676499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.955 [2024-11-09 17:17:02.676626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.676642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.955 [2024-11-09 17:17:02.676774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.676790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.955 [2024-11-09 17:17:02.676913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.955 [2024-11-09 17:17:02.676931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.955 #31 NEW cov: 11784 ft: 15042 corp: 30/93b lim: 5 exec/s: 31 rss: 70Mb L: 5/5 MS: 1 ChangeByte- 
00:07:43.215 [2024-11-09 17:17:02.736204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.736231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.215 [2024-11-09 17:17:02.736373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.736391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.215 [2024-11-09 17:17:02.736518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.736539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.215 [2024-11-09 17:17:02.736661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.736683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.215 #32 NEW cov: 11784 ft: 15043 corp: 31/97b lim: 5 exec/s: 32 rss: 70Mb L: 4/5 MS: 1 EraseBytes- 00:07:43.215 [2024-11-09 17:17:02.785319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.785348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.215 #33 NEW cov: 11784 ft: 15054 corp: 32/98b lim: 5 exec/s: 33 rss: 70Mb L: 1/5 MS: 1 ChangeByte- 00:07:43.215 [2024-11-09 17:17:02.836789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.836816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.215 [2024-11-09 17:17:02.836954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.836969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.215 [2024-11-09 17:17:02.837089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.837107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.215 [2024-11-09 17:17:02.837205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.837220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.215 [2024-11-09 17:17:02.837345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.837360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.215 #34 NEW cov: 11784 ft: 15101 corp: 33/103b lim: 5 exec/s: 34 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:07:43.215 [2024-11-09 17:17:02.886095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.886123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.215 [2024-11-09 17:17:02.886248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.215 [2024-11-09 17:17:02.886276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.215 #35 NEW cov: 11784 ft: 15123 corp: 34/105b lim: 5 exec/s: 17 rss: 70Mb L: 2/5 MS: 1 ChangeByte- 00:07:43.215 #35 DONE cov: 11784 ft: 15123 corp: 34/105b lim: 5 exec/s: 17 rss: 70Mb 00:07:43.215 Done 35 runs in 2 second(s) 00:07:43.475 17:17:03 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_9.conf 00:07:43.475 17:17:03 -- ../common.sh@72 -- # (( i++ )) 00:07:43.475 17:17:03 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.475 17:17:03 -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:43.475 17:17:03 -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:43.475 17:17:03 -- nvmf/run.sh@24 -- # local timen=1 00:07:43.475 17:17:03 -- nvmf/run.sh@25 -- # local core=0x1 00:07:43.475 17:17:03 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:43.475 17:17:03 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:43.475 17:17:03 -- nvmf/run.sh@29 -- # printf %02d 10 00:07:43.475 17:17:03 -- nvmf/run.sh@29 -- # port=4410 00:07:43.475 17:17:03 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:43.475 17:17:03 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:43.475 17:17:03 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:43.475 17:17:03 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 -r /var/tmp/spdk10.sock 00:07:43.475 [2024-11-09 17:17:03.074898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:43.475 [2024-11-09 17:17:03.074991] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370239 ] 00:07:43.475 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.734 [2024-11-09 17:17:03.250973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.734 [2024-11-09 17:17:03.314147] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:43.734 [2024-11-09 17:17:03.314288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.734 [2024-11-09 17:17:03.372539] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.734 [2024-11-09 17:17:03.388852] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:43.734 INFO: Running with entropic power schedule (0xFF, 100). 00:07:43.734 INFO: Seed: 4265134517 00:07:43.734 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:43.734 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:43.734 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:43.734 INFO: A corpus is not provided, starting from an empty corpus 00:07:43.734 #2 INITED exec/s: 0 rss: 60Mb 00:07:43.734 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:43.734 This may also happen if the target rejected all inputs we tried so far 00:07:43.734 [2024-11-09 17:17:03.454393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.734 [2024-11-09 17:17:03.454421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.735 [2024-11-09 17:17:03.454478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.735 [2024-11-09 17:17:03.454491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.735 [2024-11-09 17:17:03.454565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.735 [2024-11-09 17:17:03.454578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.735 [2024-11-09 17:17:03.454631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.735 [2024-11-09 17:17:03.454644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.994 NEW_FUNC[1/670]: 0x447688 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:43.994 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:43.994 #3 NEW cov: 11580 ft: 11581 corp: 2/36b lim: 40 exec/s: 0 
rss: 68Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:44.254 [2024-11-09 17:17:03.775251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.254 [2024-11-09 17:17:03.775287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.254 [2024-11-09 17:17:03.775345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.254 [2024-11-09 17:17:03.775361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.254 [2024-11-09 17:17:03.775417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.254 [2024-11-09 17:17:03.775432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.254 [2024-11-09 17:17:03.775490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17172d17 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.254 [2024-11-09 17:17:03.775504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.254 #9 NEW cov: 11693 ft: 12037 corp: 3/72b lim: 40 exec/s: 0 rss: 68Mb L: 36/36 MS: 1 InsertByte- 00:07:44.254 [2024-11-09 17:17:03.825012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.254 [2024-11-09 17:17:03.825037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.254 [2024-11-09 17:17:03.825092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.254 [2024-11-09 17:17:03.825105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.255 #10 NEW cov: 11699 ft: 12767 corp: 4/93b lim: 40 exec/s: 0 rss: 68Mb L: 21/36 MS: 1 EraseBytes- 00:07:44.255 [2024-11-09 17:17:03.865397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.865422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.865477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.865490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.865546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.865559] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.865628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17172d17 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.865641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.255 #11 NEW cov: 11784 ft: 13042 corp: 5/129b lim: 40 exec/s: 0 rss: 68Mb L: 36/36 MS: 1 ChangeBit- 00:07:44.255 [2024-11-09 17:17:03.905274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.905301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.905356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171a17 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.905370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.255 #12 NEW cov: 11784 ft: 13263 corp: 6/152b lim: 40 exec/s: 0 rss: 68Mb L: 23/36 MS: 1 CrossOver- 00:07:44.255 [2024-11-09 17:17:03.945607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.945632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.945686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.945700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.945755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.945768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.945820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17172d17 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.945833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.255 #13 NEW cov: 11784 ft: 13369 corp: 7/188b lim: 40 exec/s: 0 rss: 68Mb L: 36/36 MS: 1 ShuffleBytes- 00:07:44.255 [2024-11-09 17:17:03.985884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.985910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.985980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.985994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.986049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.986062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.986117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.986130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.255 [2024-11-09 17:17:03.986184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.255 [2024-11-09 17:17:03.986197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.255 #14 NEW cov: 11784 ft: 13431 corp: 8/228b lim: 40 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 CopyPart- 00:07:44.515 [2024-11-09 17:17:04.025995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.026024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.026078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.026092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.026144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.026157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.026210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.026222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.026275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.026288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.515 #15 NEW cov: 11784 ft: 13490 corp: 9/268b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CrossOver- 00:07:44.515 [2024-11-09 17:17:04.065966] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.065991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.066061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.066074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.066129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.066142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.066195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17172d17 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.066208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.515 #16 NEW cov: 11784 ft: 13595 corp: 10/304b lim: 40 exec/s: 0 rss: 69Mb L: 36/40 MS: 1 ChangeBit- 00:07:44.515 [2024-11-09 17:17:04.106109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.106135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.106189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.106203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.106258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.106273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.106328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.106341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.515 #22 NEW cov: 11784 ft: 13724 corp: 11/339b lim: 40 exec/s: 0 rss: 69Mb L: 35/40 MS: 1 ShuffleBytes- 00:07:44.515 [2024-11-09 17:17:04.145929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17e9e417 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.145954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:44.515 [2024-11-09 17:17:04.146010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171a17 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.146023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.515 #23 NEW cov: 11784 ft: 13794 corp: 12/362b lim: 40 exec/s: 0 rss: 69Mb L: 23/40 MS: 1 ChangeBinInt- 00:07:44.515 [2024-11-09 17:17:04.186050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.186074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.186127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17172d17 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.186141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.515 #24 NEW cov: 11784 ft: 13810 corp: 13/385b lim: 40 exec/s: 0 rss: 69Mb L: 23/40 MS: 1 CrossOver- 00:07:44.515 [2024-11-09 17:17:04.226180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.226205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.515 [2024-11-09 17:17:04.226257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.226271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.515 #25 NEW cov: 11784 ft: 13815 corp: 14/406b lim: 40 exec/s: 0 rss: 69Mb L: 21/40 MS: 1 ChangeByte- 00:07:44.515 [2024-11-09 17:17:04.266168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.515 [2024-11-09 17:17:04.266193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.775 #26 NEW cov: 11784 ft: 14176 corp: 15/418b lim: 40 exec/s: 0 rss: 69Mb L: 12/40 MS: 1 EraseBytes- 00:07:44.775 [2024-11-09 17:17:04.306391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17e9e417 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.775 [2024-11-09 17:17:04.306415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.775 [2024-11-09 17:17:04.306486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171217 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.775 [2024-11-09 17:17:04.306502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.775 NEW_FUNC[1/1]: 0x194e708 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:44.775 #27 NEW cov: 11807 ft: 14221 corp: 16/441b lim: 40 exec/s: 0 rss: 69Mb L: 23/40 MS: 1 ChangeBit- 00:07:44.775 [2024-11-09 17:17:04.346475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.775 [2024-11-09 17:17:04.346499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.775 [2024-11-09 17:17:04.346573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:6363630a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.775 [2024-11-09 17:17:04.346587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.775 #28 NEW cov: 11807 ft: 14236 corp: 17/457b lim: 40 exec/s: 0 rss: 69Mb L: 16/40 MS: 1 InsertRepeatedBytes- 00:07:44.775 [2024-11-09 17:17:04.386953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.775 [2024-11-09 17:17:04.386977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.775 [2024-11-09 17:17:04.387032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.775 [2024-11-09 17:17:04.387045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.775 [2024-11-09 17:17:04.387098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.387110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.776 [2024-11-09 17:17:04.387163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.387176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.776 [2024-11-09 17:17:04.387229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.387241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.776 #29 NEW cov: 11807 ft: 14305 corp: 18/497b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:44.776 [2024-11-09 17:17:04.426717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.426741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.776 [2024-11-09 17:17:04.426793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:6363010e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.426806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.776 #30 NEW cov: 11807 ft: 14327 corp: 19/515b lim: 40 exec/s: 30 rss: 69Mb L: 18/40 MS: 1 CMP- DE: "\001\016"- 00:07:44.776 [2024-11-09 17:17:04.467156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.467181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.776 [2024-11-09 17:17:04.467239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.467253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.776 [2024-11-09 17:17:04.467307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.467320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.776 [2024-11-09 17:17:04.467372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.467385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.776 #31 NEW cov: 11807 ft: 14337 corp: 20/550b lim: 40 exec/s: 31 rss: 69Mb L: 35/40 MS: 1 ShuffleBytes- 00:07:44.776 [2024-11-09 17:17:04.506989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6363636f cdw11:63636363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.507013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.776 [2024-11-09 17:17:04.507070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:6363630a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.776 [2024-11-09 17:17:04.507083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.776 #32 NEW cov: 11807 ft: 14354 corp: 21/566b lim: 40 exec/s: 32 rss: 69Mb L: 16/40 MS: 1 ChangeByte- 00:07:45.036 [2024-11-09 17:17:04.547120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.036 [2024-11-09 17:17:04.547144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.036 [2024-11-09 17:17:04.547199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:1e171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.036 [2024-11-09 17:17:04.547211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.036 #33 NEW cov: 11807 ft: 14371 corp: 22/587b lim: 40 exec/s: 33 rss: 69Mb L: 21/40 MS: 1 ChangeBinInt- 00:07:45.036 [2024-11-09 17:17:04.587466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.036 [2024-11-09 17:17:04.587490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.036 [2024-11-09 17:17:04.587546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17172d17 cdw11:17171701 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.036 [2024-11-09 17:17:04.587576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.036 [2024-11-09 17:17:04.587632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:01010101 cdw11:01010101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.036 [2024-11-09 17:17:04.587645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.036 [2024-11-09 17:17:04.587700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:01171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.036 [2024-11-09 17:17:04.587715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.036 #34 NEW cov: 11807 ft: 14387 corp: 23/620b lim: 40 exec/s: 34 rss: 69Mb L: 33/40 MS: 1 InsertRepeatedBytes- 00:07:45.036 [2024-11-09 17:17:04.627356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.627380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.627433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.627446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.037 #35 NEW cov: 11807 ft: 14400 corp: 24/643b lim: 40 exec/s: 35 rss: 69Mb L: 23/40 MS: 1 EraseBytes- 00:07:45.037 [2024-11-09 17:17:04.667791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.667815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.667885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17eee817 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.667898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.667950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 
nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.667963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.668015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.668027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.668080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.668092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.037 #36 NEW cov: 11807 ft: 14441 corp: 25/683b lim: 40 exec/s: 36 rss: 69Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:45.037 [2024-11-09 17:17:04.707570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.707596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.707650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.707663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.037 #37 NEW cov: 11807 ft: 14445 corp: 26/701b lim: 40 exec/s: 37 rss: 69Mb L: 18/40 MS: 1 EraseBytes- 00:07:45.037 [2024-11-09 17:17:04.747675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.747700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.747755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171a17 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.747768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.037 #38 NEW cov: 11807 ft: 14454 corp: 27/724b lim: 40 exec/s: 38 rss: 70Mb L: 23/40 MS: 1 ShuffleBytes- 00:07:45.037 [2024-11-09 17:17:04.788007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.788032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.788086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.788099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.788153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.788166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.037 [2024-11-09 17:17:04.788217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.037 [2024-11-09 17:17:04.788229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.297 #39 NEW cov: 11807 ft: 14485 corp: 28/757b lim: 40 exec/s: 39 rss: 70Mb L: 33/40 MS: 1 EraseBytes- 00:07:45.297 [2024-11-09 17:17:04.827878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6363636f cdw11:636f6363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.827903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.827956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:6363630a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.827969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.297 #40 NEW cov: 11807 ft: 14490 corp: 29/773b lim: 40 exec/s: 40 rss: 70Mb L: 16/40 MS: 1 CopyPart- 00:07:45.297 [2024-11-09 17:17:04.868412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.868437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.868493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.868506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.868563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.868576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.868648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:4a4a4a4a cdw11:4a171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.868661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.868718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.868731] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.297 #41 NEW cov: 11807 ft: 14506 corp: 30/813b lim: 40 exec/s: 41 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:45.297 [2024-11-09 17:17:04.908101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.908126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.908198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.908212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.297 #42 NEW cov: 11807 ft: 14538 corp: 31/835b lim: 40 exec/s: 42 rss: 70Mb L: 22/40 MS: 1 EraseBytes- 00:07:45.297 [2024-11-09 17:17:04.948596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.948621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.948691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17176017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.948705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.948758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.948771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.948824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.948837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.948891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.948904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.297 #43 NEW cov: 11807 ft: 14553 corp: 32/875b lim: 40 exec/s: 43 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:45.297 [2024-11-09 17:17:04.988442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a17c017 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.988466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.988521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:1717172d cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.988538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:04.988610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:04.988626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.297 #44 NEW cov: 11807 ft: 14759 corp: 33/899b lim: 40 exec/s: 44 rss: 70Mb L: 24/40 MS: 1 InsertByte- 00:07:45.297 [2024-11-09 17:17:05.028442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:05.028466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.297 [2024-11-09 17:17:05.028519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:171a1717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.297 [2024-11-09 17:17:05.028537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.297 #45 NEW cov: 11807 ft: 14769 corp: 34/922b lim: 40 exec/s: 45 rss: 70Mb L: 23/40 MS: 1 ShuffleBytes- 00:07:45.576 [2024-11-09 17:17:05.068984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.069009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.069063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:30171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.069077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.069130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.069143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.069197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.069210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.069264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.069277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.576 #46 NEW cov: 11807 ft: 
14799 corp: 35/962b lim: 40 exec/s: 46 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:45.576 [2024-11-09 17:17:05.108817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.108841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.108913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:02171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.108927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.108981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.108994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.576 #47 NEW cov: 11807 ft: 14805 corp: 36/993b lim: 40 exec/s: 47 rss: 70Mb L: 31/40 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\002"- 00:07:45.576 [2024-11-09 17:17:05.149068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.149092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.149148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.149161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.149232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.149246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.149300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.149312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.576 #48 NEW cov: 11807 ft: 14818 corp: 37/1028b lim: 40 exec/s: 48 rss: 70Mb L: 35/40 MS: 1 ShuffleBytes- 00:07:45.576 [2024-11-09 17:17:05.188909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.576 [2024-11-09 17:17:05.188933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.576 [2024-11-09 17:17:05.188987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171716 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:45.576 [2024-11-09 17:17:05.189001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.576 #49 NEW cov: 11807 ft: 14825 corp: 38/1049b lim: 40 exec/s: 49 rss: 70Mb L: 21/40 MS: 1 ChangeBit- 00:07:45.577 [2024-11-09 17:17:05.229165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:171717e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.577 [2024-11-09 17:17:05.229189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.577 [2024-11-09 17:17:05.229243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:1717171a cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.577 [2024-11-09 17:17:05.229255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.577 [2024-11-09 17:17:05.229308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.577 [2024-11-09 17:17:05.229321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.577 #50 NEW cov: 11807 ft: 14834 corp: 39/1073b lim: 40 exec/s: 50 rss: 70Mb L: 24/40 MS: 1 InsertByte- 00:07:45.577 [2024-11-09 17:17:05.269139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:1717e717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.577 [2024-11-09 17:17:05.269164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.577 [2024-11-09 17:17:05.269219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.577 [2024-11-09 17:17:05.269233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.577 #51 NEW cov: 11807 ft: 14843 corp: 40/1094b lim: 40 exec/s: 51 rss: 70Mb L: 21/40 MS: 1 ChangeBinInt- 00:07:45.577 [2024-11-09 17:17:05.309483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17e9e417 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.577 [2024-11-09 17:17:05.309508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.577 [2024-11-09 17:17:05.309581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171a17 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.577 [2024-11-09 17:17:05.309595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.577 [2024-11-09 17:17:05.309649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.577 [2024-11-09 17:17:05.309663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.577 [2024-11-09 
17:17:05.309726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.577 [2024-11-09 17:17:05.309740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.868 #52 NEW cov: 11807 ft: 14856 corp: 41/1131b lim: 40 exec/s: 52 rss: 70Mb L: 37/40 MS: 1 CrossOver- 00:07:45.868 [2024-11-09 17:17:05.349366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.868 [2024-11-09 17:17:05.349391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.868 [2024-11-09 17:17:05.349448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:171a1717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.868 [2024-11-09 17:17:05.349461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.868 #53 NEW cov: 11807 ft: 14866 corp: 42/1154b lim: 40 exec/s: 53 rss: 70Mb L: 23/40 MS: 1 ChangeByte- 00:07:45.868 [2024-11-09 17:17:05.389500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.868 [2024-11-09 17:17:05.389530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.868 [2024-11-09 17:17:05.389587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17172d17 cdw11:17ee1717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.868 [2024-11-09 17:17:05.389600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.868 #54 NEW cov: 11807 ft: 14884 corp: 43/1177b lim: 40 exec/s: 54 rss: 70Mb L: 23/40 MS: 1 ChangeBinInt- 00:07:45.868 [2024-11-09 17:17:05.429863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171e17 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.868 [2024-11-09 17:17:05.429887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.868 [2024-11-09 17:17:05.429959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.868 [2024-11-09 17:17:05.429974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.868 [2024-11-09 17:17:05.430028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.868 [2024-11-09 17:17:05.430044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.868 [2024-11-09 17:17:05.430096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.868 
[2024-11-09 17:17:05.430109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.868 #55 NEW cov: 11807 ft: 14902 corp: 44/1214b lim: 40 exec/s: 27 rss: 70Mb L: 37/40 MS: 1 CrossOver- 00:07:45.868 #55 DONE cov: 11807 ft: 14902 corp: 44/1214b lim: 40 exec/s: 27 rss: 70Mb 00:07:45.868 ###### Recommended dictionary. ###### 00:07:45.868 "\001\016" # Uses: 0 00:07:45.868 "\001\000\000\000\000\000\000\002" # Uses: 0 00:07:45.868 ###### End of recommended dictionary. ###### 00:07:45.868 Done 55 runs in 2 second(s) 00:07:45.868 17:17:05 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_10.conf 00:07:45.868 17:17:05 -- ../common.sh@72 -- # (( i++ )) 00:07:45.868 17:17:05 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.868 17:17:05 -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:45.868 17:17:05 -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:45.868 17:17:05 -- nvmf/run.sh@24 -- # local timen=1 00:07:45.868 17:17:05 -- nvmf/run.sh@25 -- # local core=0x1 00:07:45.869 17:17:05 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:45.869 17:17:05 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:45.869 17:17:05 -- nvmf/run.sh@29 -- # printf %02d 11 00:07:45.869 17:17:05 -- nvmf/run.sh@29 -- # port=4411 00:07:45.869 17:17:05 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:45.869 17:17:05 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:45.869 17:17:05 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:45.869 17:17:05 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 -r /var/tmp/spdk11.sock 00:07:45.869 [2024-11-09 17:17:05.623667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.869 [2024-11-09 17:17:05.623764] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370671 ] 00:07:46.128 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.128 [2024-11-09 17:17:05.809984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.128 [2024-11-09 17:17:05.873737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.128 [2024-11-09 17:17:05.873881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.388 [2024-11-09 17:17:05.932263] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.388 [2024-11-09 17:17:05.948660] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:46.388 INFO: Running with entropic power schedule (0xFF, 100). 
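The two quoted entries in the recommended-dictionary block above are emitted in libFuzzer's plain dictionary format (one C-escaped token per line), so they can be saved verbatim for later runs; a minimal sketch, where the file name is illustrative and whether the llvm_nvme_fuzz wrapper forwards the flag is an assumption, not something this log shows:

# Save the recommended entries exactly as printed at the end of run 10.
# libFuzzer dictionary files are one quoted, C-escaped token per line.
cat > nvmf_10.dict <<'EOF'
"\001\016"
"\001\000\000\000\000\000\000\002"
EOF
# A stock libFuzzer target would consume this via -dict=nvmf_10.dict;
# pass-through of that flag by the SPDK harness is assumed, not verified here.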
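The nvmf/run.sh trace above shows how each numbered fuzzer instance gets its own TCP listener: `printf %02d 11` yields the suffix, port 4411 is derived from it, the transport ID is assembled around that port, and sed rewrites the default trsvcid 4420 in the JSON config. A minimal sketch of that derivation, assuming the 44xx port base implied by the trace and using illustrative file paths in place of the workspace paths in the log:

#!/usr/bin/env bash
# Sketch of the per-instance setup recorded above; not the actual run.sh.
fuzzer_type=11                                   # fuzzer number, matches -Z 11 in the trace
port="44$(printf %02d "$fuzzer_type")"           # printf %02d 11 -> "11"; assumed 44xx base gives 4411
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
# Rewrite the default listener port (4420), as the trace's sed line does.
# fuzz_json.conf and the /tmp name stand in for the paths shown in the log.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"
echo "$trid"                                     # matches the trid='...trsvcid:4411' line above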
00:07:46.388 INFO: Seed: 2530186636 00:07:46.388 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:46.388 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:46.388 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:46.388 INFO: A corpus is not provided, starting from an empty corpus 00:07:46.388 #2 INITED exec/s: 0 rss: 61Mb 00:07:46.388 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:46.388 This may also happen if the target rejected all inputs we tried so far 00:07:46.388 [2024-11-09 17:17:06.004148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.388 [2024-11-09 17:17:06.004183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.388 [2024-11-09 17:17:06.004242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.388 [2024-11-09 17:17:06.004256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.388 [2024-11-09 17:17:06.004311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.388 [2024-11-09 17:17:06.004324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.647 NEW_FUNC[1/671]: 0x4493f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:46.647 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:46.647 #4 NEW cov: 11592 ft: 11593 corp: 2/28b lim: 40 exec/s: 0 rss: 68Mb L: 27/27 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:46.647 [2024-11-09 17:17:06.304967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.647 [2024-11-09 17:17:06.304999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.647 [2024-11-09 17:17:06.305077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.647 [2024-11-09 17:17:06.305092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.647 [2024-11-09 17:17:06.305150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.647 [2024-11-09 17:17:06.305164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.647 #6 NEW cov: 11705 ft: 12089 corp: 3/57b lim: 40 exec/s: 0 rss: 68Mb L: 29/29 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:46.647 [2024-11-09 17:17:06.345000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.647 [2024-11-09 17:17:06.345026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.647 [2024-11-09 17:17:06.345084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.648 [2024-11-09 17:17:06.345098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.648 [2024-11-09 17:17:06.345157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000001b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.648 [2024-11-09 17:17:06.345170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.648 #7 NEW cov: 11711 ft: 12295 corp: 4/84b lim: 40 exec/s: 0 rss: 68Mb L: 27/29 MS: 1 ChangeBinInt- 00:07:46.648 [2024-11-09 17:17:06.385084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.648 [2024-11-09 17:17:06.385110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.648 [2024-11-09 17:17:06.385170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.648 [2024-11-09 17:17:06.385190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.648 [2024-11-09 17:17:06.385249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.648 [2024-11-09 17:17:06.385262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.648 #8 NEW cov: 11796 ft: 12575 corp: 5/112b lim: 40 exec/s: 0 rss: 68Mb L: 28/29 MS: 1 CopyPart- 00:07:46.907 [2024-11-09 17:17:06.425048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.907 [2024-11-09 17:17:06.425074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.907 [2024-11-09 17:17:06.425150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.425164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.908 #9 NEW cov: 11796 ft: 13027 corp: 6/135b lim: 40 exec/s: 0 rss: 68Mb L: 23/29 MS: 1 EraseBytes- 00:07:46.908 [2024-11-09 17:17:06.465567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d2d0ac2 cdw11:c20ac2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.465593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.465657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.465672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.465733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.465747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.465809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:0000c200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.465822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.908 #10 NEW cov: 11796 ft: 13409 corp: 7/173b lim: 40 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 CrossOver- 00:07:46.908 [2024-11-09 17:17:06.505450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.505475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.505555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.505570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.505629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.505642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.908 #11 NEW cov: 11796 ft: 13512 corp: 8/201b lim: 40 exec/s: 0 rss: 68Mb L: 28/38 MS: 1 ChangeBinInt- 00:07:46.908 [2024-11-09 17:17:06.545715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.545740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.545817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.545831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.545891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.545905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.545966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.545980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.908 #12 NEW cov: 11796 ft: 13518 corp: 9/237b lim: 40 exec/s: 0 rss: 68Mb L: 36/38 MS: 1 InsertRepeatedBytes- 00:07:46.908 [2024-11-09 17:17:06.585679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.585706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.585767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c246c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.585781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.585854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000001b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.585868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.908 #13 NEW cov: 11796 ft: 13593 corp: 10/264b lim: 40 exec/s: 0 rss: 68Mb L: 27/38 MS: 1 ChangeBinInt- 00:07:46.908 [2024-11-09 17:17:06.625772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c227c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.625798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.625860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.625874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.625932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.625944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.908 #14 NEW cov: 11796 ft: 13671 corp: 11/292b lim: 40 exec/s: 0 rss: 68Mb L: 28/38 MS: 1 InsertByte- 00:07:46.908 [2024-11-09 17:17:06.665728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ac2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 [2024-11-09 17:17:06.665753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.908 [2024-11-09 17:17:06.665831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.908 
[2024-11-09 17:17:06.665845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.168 #15 NEW cov: 11796 ft: 13680 corp: 12/311b lim: 40 exec/s: 0 rss: 68Mb L: 19/38 MS: 1 CrossOver- 00:07:47.168 [2024-11-09 17:17:06.706202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.706227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.706305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.706319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.706379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.706393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.706451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:4d4d4d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.706464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.168 #16 NEW cov: 11796 ft: 13718 corp: 13/350b lim: 40 exec/s: 0 rss: 69Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:47.168 [2024-11-09 17:17:06.746279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d2d0ac2 cdw11:c20ac2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.746304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.746365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.746379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.746438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.746451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.746511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:0000c200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.746528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.168 #17 NEW cov: 11796 ft: 13732 corp: 14/388b lim: 40 exec/s: 0 rss: 69Mb L: 38/39 MS: 1 ShuffleBytes- 00:07:47.168 [2024-11-09 17:17:06.786284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.786310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.786370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.786384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.786446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.786459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.168 #18 NEW cov: 11796 ft: 13786 corp: 15/415b lim: 40 exec/s: 0 rss: 69Mb L: 27/39 MS: 1 CopyPart- 00:07:47.168 [2024-11-09 17:17:06.826538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.826565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.826627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.826640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.826699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.826712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.826772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.826786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.168 #19 NEW cov: 11796 ft: 13806 corp: 16/451b lim: 40 exec/s: 0 rss: 69Mb L: 36/39 MS: 1 CopyPart- 00:07:47.168 [2024-11-09 17:17:06.866369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.866395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.866456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c231c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.866471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.168 NEW_FUNC[1/1]: 0x194e708 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:47.168 #20 NEW cov: 11819 ft: 13838 corp: 17/474b lim: 40 exec/s: 0 rss: 69Mb L: 23/39 MS: 1 ChangeByte- 00:07:47.168 [2024-11-09 17:17:06.906671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c227c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.906697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.906772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c6c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.906787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.168 [2024-11-09 17:17:06.906847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.168 [2024-11-09 17:17:06.906861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.168 #21 NEW cov: 11819 ft: 13857 corp: 18/502b lim: 40 exec/s: 0 rss: 69Mb L: 28/39 MS: 1 ChangeBit- 00:07:47.427 [2024-11-09 17:17:06.946922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2cece SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.427 [2024-11-09 17:17:06.946951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.427 [2024-11-09 17:17:06.947012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:cececece cdw11:cececece SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:06.947026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:06.947085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c231 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:06.947099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:06.947157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c20000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:06.947170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.428 #22 NEW cov: 11819 ft: 13881 corp: 19/535b lim: 40 exec/s: 0 rss: 69Mb L: 33/39 MS: 1 InsertRepeatedBytes- 00:07:47.428 [2024-11-09 17:17:06.986903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:06.986928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:06.986991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c25ec2c2 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:06.987004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:06.987064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:06.987078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.428 #23 NEW cov: 11819 ft: 13924 corp: 20/563b lim: 40 exec/s: 23 rss: 69Mb L: 28/39 MS: 1 ChangeByte- 00:07:47.428 [2024-11-09 17:17:07.026980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c227c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.027005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.027081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.027096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.027155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.027168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.428 #24 NEW cov: 11819 ft: 13954 corp: 21/591b lim: 40 exec/s: 24 rss: 69Mb L: 28/39 MS: 1 CopyPart- 00:07:47.428 [2024-11-09 17:17:07.067315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.067340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.067398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.067414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.067472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0dc200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.067485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.067546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00c200c2 cdw11:00000d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.067559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.428 #25 NEW cov: 11819 ft: 14015 corp: 22/628b lim: 40 exec/s: 25 rss: 69Mb L: 37/39 MS: 1 CrossOver- 00:07:47.428 [2024-11-09 17:17:07.107581] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d2d0ac2 cdw11:c20ac2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.107607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.107684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.107698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.107770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2ca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.107784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.107845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:0000c200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.107859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.428 #26 NEW cov: 11819 ft: 14051 corp: 23/666b lim: 40 exec/s: 26 rss: 70Mb L: 38/39 MS: 1 ChangeBit- 00:07:47.428 [2024-11-09 17:17:07.147658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.147683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.147744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.147758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.147834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.147848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.147908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0d0d0d0d cdw11:8a0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.147921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.428 #27 NEW cov: 11819 ft: 14115 corp: 24/702b lim: 40 exec/s: 27 rss: 70Mb L: 36/39 MS: 1 CopyPart- 00:07:47.428 [2024-11-09 17:17:07.187596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.187626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 
17:17:07.187688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c231c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.187702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.428 [2024-11-09 17:17:07.187761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0000002a cdw11:000000c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.428 [2024-11-09 17:17:07.187774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.687 #28 NEW cov: 11819 ft: 14129 corp: 25/726b lim: 40 exec/s: 28 rss: 70Mb L: 24/39 MS: 1 InsertByte- 00:07:47.687 [2024-11-09 17:17:07.227573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c22d2d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.227598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.227684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0ac2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.227698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.687 #29 NEW cov: 11819 ft: 14150 corp: 26/744b lim: 40 exec/s: 29 rss: 70Mb L: 18/39 MS: 1 CrossOver- 00:07:47.687 [2024-11-09 17:17:07.267988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac224 cdw11:c2c2cece SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.268013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.268076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:cececece cdw11:cececece SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.268090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.268151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c231 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.268164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.268222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c20000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.268235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.687 #30 NEW cov: 11819 ft: 14163 corp: 27/777b lim: 40 exec/s: 30 rss: 70Mb L: 33/39 MS: 1 ChangeByte- 00:07:47.687 [2024-11-09 17:17:07.308128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d2dc2c2 cdw11:c2c2c200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.308154] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.308212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00c200c2 cdw11:0000c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.308226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.308284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.308300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.308358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:0000c200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.308371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.687 #31 NEW cov: 11819 ft: 14177 corp: 28/815b lim: 40 exec/s: 31 rss: 70Mb L: 38/39 MS: 1 CopyPart- 00:07:47.687 [2024-11-09 17:17:07.348207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:0cc2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.348233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.348293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.348307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.348368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.348381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.348439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.348451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.687 #32 NEW cov: 11819 ft: 14266 corp: 29/851b lim: 40 exec/s: 32 rss: 70Mb L: 36/39 MS: 1 ChangeByte- 00:07:47.687 [2024-11-09 17:17:07.388188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:c2c2c200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.388213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.388290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.388304] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.687 [2024-11-09 17:17:07.388365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.687 [2024-11-09 17:17:07.388378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.688 #33 NEW cov: 11819 ft: 14276 corp: 30/879b lim: 40 exec/s: 33 rss: 70Mb L: 28/39 MS: 1 EraseBytes- 00:07:47.688 [2024-11-09 17:17:07.428293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c227c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.688 [2024-11-09 17:17:07.428318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.688 [2024-11-09 17:17:07.428393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.688 [2024-11-09 17:17:07.428407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.688 [2024-11-09 17:17:07.428466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.688 [2024-11-09 17:17:07.428483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.688 #34 NEW cov: 11819 ft: 14356 corp: 31/907b lim: 40 exec/s: 34 rss: 70Mb L: 28/39 MS: 1 ShuffleBytes- 00:07:47.947 [2024-11-09 17:17:07.468716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.468741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.468804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.468818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.468876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.468890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.468949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00003100 cdw11:004d4d4d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.468961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.469023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:1bc2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 
17:17:07.469036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.947 #35 NEW cov: 11819 ft: 14479 corp: 32/947b lim: 40 exec/s: 35 rss: 70Mb L: 40/40 MS: 1 InsertByte- 00:07:47.947 [2024-11-09 17:17:07.508682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d2d0ac2 cdw11:c20ac2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.508707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.508768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.508782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.508843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2ca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.508856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.508915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:0000c200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.508928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.947 #36 NEW cov: 11819 ft: 14521 corp: 33/985b lim: 40 exec/s: 36 rss: 70Mb L: 38/40 MS: 1 ShuffleBytes- 00:07:47.947 [2024-11-09 17:17:07.548814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c227c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.548839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.548902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:27c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.548919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.548979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.548992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.549051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.549063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.947 #37 NEW cov: 11819 ft: 14522 corp: 34/1020b lim: 40 exec/s: 37 rss: 70Mb L: 35/40 MS: 1 CopyPart- 00:07:47.947 [2024-11-09 17:17:07.588757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.588782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.588846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c25ec2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.588859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.588920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.588934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.947 #38 NEW cov: 11819 ft: 14525 corp: 35/1048b lim: 40 exec/s: 38 rss: 70Mb L: 28/40 MS: 1 ShuffleBytes- 00:07:47.947 [2024-11-09 17:17:07.628867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d2d0ac2 cdw11:c20ac2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.628892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.628973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.628987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.629048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2ca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.629062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.947 #39 NEW cov: 11819 ft: 14532 corp: 36/1077b lim: 40 exec/s: 39 rss: 70Mb L: 29/40 MS: 1 EraseBytes- 00:07:47.947 [2024-11-09 17:17:07.669141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c211 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.669167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.669230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.669244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.669305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.669321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.669380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2000000 cdw11:00000100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.669393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.709283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c211 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.709307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.709384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.709398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.709458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.709472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.947 [2024-11-09 17:17:07.709536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2000000 cdw11:00c20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.947 [2024-11-09 17:17:07.709550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.207 #41 NEW cov: 11819 ft: 14566 corp: 37/1114b lim: 40 exec/s: 41 rss: 70Mb L: 37/40 MS: 2 InsertRepeatedBytes-CrossOver- 00:07:48.207 [2024-11-09 17:17:07.749222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac231 cdw11:c2c227c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.749247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.749311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c6c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.749324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.749385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.749399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.207 #42 NEW cov: 11819 ft: 14600 corp: 38/1143b lim: 40 exec/s: 42 rss: 70Mb L: 29/40 MS: 1 InsertByte- 00:07:48.207 [2024-11-09 17:17:07.789273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d2d0ac2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.789299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.789378] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0ac2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.789392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.789455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2ca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.789471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.207 #43 NEW cov: 11819 ft: 14605 corp: 39/1172b lim: 40 exec/s: 43 rss: 70Mb L: 29/40 MS: 1 ShuffleBytes- 00:07:48.207 [2024-11-09 17:17:07.829561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.829587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.829666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.829690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.829751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.829764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.829821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.829834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.207 #44 NEW cov: 11819 ft: 14610 corp: 40/1209b lim: 40 exec/s: 44 rss: 70Mb L: 37/40 MS: 1 InsertByte- 00:07:48.207 [2024-11-09 17:17:07.869887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:0c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.869911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.869974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00c2c2c2 cdw11:c2c2c200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.869988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.870048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.870061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.870121] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.870135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.870196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:1bc2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.870209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:48.207 #45 NEW cov: 11819 ft: 14674 corp: 41/1249b lim: 40 exec/s: 45 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:48.207 [2024-11-09 17:17:07.919881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.919907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.919968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.919985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.920045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000c2 cdw11:c2c20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.920058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.920116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0000c2c2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.920129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.207 #46 NEW cov: 11819 ft: 14691 corp: 42/1285b lim: 40 exec/s: 46 rss: 70Mb L: 36/40 MS: 1 ShuffleBytes- 00:07:48.207 [2024-11-09 17:17:07.959807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.959832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.959896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c25e cdw11:c2c20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.959910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.207 [2024-11-09 17:17:07.959985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00001bc2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.207 [2024-11-09 17:17:07.959999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.467 #47 NEW 
cov: 11819 ft: 14711 corp: 43/1311b lim: 40 exec/s: 47 rss: 70Mb L: 26/40 MS: 1 EraseBytes- 00:07:48.467 [2024-11-09 17:17:07.999919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2d0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.467 [2024-11-09 17:17:07.999943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.467 [2024-11-09 17:17:08.000003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.467 [2024-11-09 17:17:08.000017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.467 [2024-11-09 17:17:08.000078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.467 [2024-11-09 17:17:08.000091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.467 #48 NEW cov: 11819 ft: 14715 corp: 44/1338b lim: 40 exec/s: 24 rss: 70Mb L: 27/40 MS: 1 ShuffleBytes- 00:07:48.467 #48 DONE cov: 11819 ft: 14715 corp: 44/1338b lim: 40 exec/s: 24 rss: 70Mb 00:07:48.467 Done 48 runs in 2 second(s) 00:07:48.467 17:17:08 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_11.conf 00:07:48.467 17:17:08 -- ../common.sh@72 -- # (( i++ )) 00:07:48.467 17:17:08 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:48.467 17:17:08 -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:48.467 17:17:08 -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:48.467 17:17:08 -- nvmf/run.sh@24 -- # local timen=1 00:07:48.467 17:17:08 -- nvmf/run.sh@25 -- # local core=0x1 00:07:48.467 17:17:08 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:48.467 17:17:08 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:48.467 17:17:08 -- nvmf/run.sh@29 -- # printf %02d 12 00:07:48.467 17:17:08 -- nvmf/run.sh@29 -- # port=4412 00:07:48.467 17:17:08 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:48.467 17:17:08 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:48.467 17:17:08 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:48.467 17:17:08 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 -r /var/tmp/spdk12.sock 00:07:48.467 [2024-11-09 17:17:08.182861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:48.467 [2024-11-09 17:17:08.182930] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid371072 ] 00:07:48.467 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.727 [2024-11-09 17:17:08.368893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.727 [2024-11-09 17:17:08.433213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.727 [2024-11-09 17:17:08.433355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.727 [2024-11-09 17:17:08.491457] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.987 [2024-11-09 17:17:08.507838] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:48.987 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.987 INFO: Seed: 794209440 00:07:48.987 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:48.987 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:48.987 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:48.987 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.987 #2 INITED exec/s: 0 rss: 60Mb 00:07:48.987 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:48.987 This may also happen if the target rejected all inputs we tried so far 00:07:48.987 [2024-11-09 17:17:08.585161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.987 [2024-11-09 17:17:08.585201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.987 [2024-11-09 17:17:08.585336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.987 [2024-11-09 17:17:08.585354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.987 [2024-11-09 17:17:08.585477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.987 [2024-11-09 17:17:08.585494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.987 [2024-11-09 17:17:08.585621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.987 [2024-11-09 17:17:08.585640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.987 [2024-11-09 17:17:08.585766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:2424240a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.987 [2024-11-09 17:17:08.585783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:49.246 NEW_FUNC[1/668]: 0x44b168 in 
fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:49.246 NEW_FUNC[2/668]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:49.246 #3 NEW cov: 11562 ft: 11567 corp: 2/41b lim: 40 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:49.246 [2024-11-09 17:17:08.924757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.246 [2024-11-09 17:17:08.924794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.246 NEW_FUNC[1/3]: 0x151e658 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3790 00:07:49.246 NEW_FUNC[2/3]: 0x16ee128 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1507 00:07:49.246 #9 NEW cov: 11703 ft: 13101 corp: 3/53b lim: 40 exec/s: 0 rss: 68Mb L: 12/40 MS: 1 CrossOver- 00:07:49.246 [2024-11-09 17:17:08.984859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a2c2424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.246 [2024-11-09 17:17:08.984887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.246 #10 NEW cov: 11709 ft: 13350 corp: 4/65b lim: 40 exec/s: 0 rss: 68Mb L: 12/40 MS: 1 ChangeByte- 00:07:49.505 [2024-11-09 17:17:09.034996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a2c2424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.035026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.505 #11 NEW cov: 11794 ft: 13687 corp: 5/74b lim: 40 exec/s: 0 rss: 68Mb L: 9/40 MS: 1 CrossOver- 00:07:49.505 [2024-11-09 17:17:09.086284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.086312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.505 [2024-11-09 17:17:09.086434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.086450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.505 [2024-11-09 17:17:09.086584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.086601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.505 [2024-11-09 17:17:09.086726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.086742] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.505 [2024-11-09 17:17:09.086872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:2424240a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.086888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:49.505 #12 NEW cov: 11794 ft: 13850 corp: 6/114b lim: 40 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 CopyPart- 00:07:49.505 [2024-11-09 17:17:09.146487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.146515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.505 [2024-11-09 17:17:09.146645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.146662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.505 [2024-11-09 17:17:09.146787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.146803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.505 [2024-11-09 17:17:09.146903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.505 [2024-11-09 17:17:09.146918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.506 [2024-11-09 17:17:09.147042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:2424240a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.147061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:49.506 #13 NEW cov: 11794 ft: 13946 corp: 7/154b lim: 40 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 CrossOver- 00:07:49.506 [2024-11-09 17:17:09.196704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.196733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.506 [2024-11-09 17:17:09.196863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.196880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.506 [2024-11-09 17:17:09.197001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.197017] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.506 [2024-11-09 17:17:09.197144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.197162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.506 [2024-11-09 17:17:09.197285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:24242480 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.197300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:49.506 #14 NEW cov: 11794 ft: 13984 corp: 8/194b lim: 40 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 ChangeByte- 00:07:49.506 [2024-11-09 17:17:09.256851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.256879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.506 [2024-11-09 17:17:09.257007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.257026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.506 [2024-11-09 17:17:09.257159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.257178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.506 [2024-11-09 17:17:09.257304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24240a24 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.257323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.506 [2024-11-09 17:17:09.257452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:24242480 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.506 [2024-11-09 17:17:09.257469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:49.765 #20 NEW cov: 11794 ft: 14005 corp: 9/234b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CrossOver- 00:07:49.766 [2024-11-09 17:17:09.316682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.316710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.316833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.316850] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.316965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.316981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.317107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24240a24 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.317122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.766 #21 NEW cov: 11794 ft: 14137 corp: 10/268b lim: 40 exec/s: 0 rss: 69Mb L: 34/40 MS: 1 EraseBytes- 00:07:49.766 [2024-11-09 17:17:09.377220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.377247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.377366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.377400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.377529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.377546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.377684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.377701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.377832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.377853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:49.766 #22 NEW cov: 11794 ft: 14181 corp: 11/308b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CrossOver- 00:07:49.766 [2024-11-09 17:17:09.427386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.427415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.427546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 
17:17:09.427565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.427699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.427715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.427841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.427871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.427996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:2424240a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.428011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:49.766 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:49.766 #23 NEW cov: 11817 ft: 14223 corp: 12/348b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:49.766 [2024-11-09 17:17:09.477354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.477383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.477510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.477532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.477667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.477685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.477817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.477834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.766 #24 NEW cov: 11817 ft: 14338 corp: 13/386b lim: 40 exec/s: 0 rss: 69Mb L: 38/40 MS: 1 EraseBytes- 00:07:49.766 [2024-11-09 17:17:09.527880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.527908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.528042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.528059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.528189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.528205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.528330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.528348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.766 [2024-11-09 17:17:09.528473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.766 [2024-11-09 17:17:09.528491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.026 #25 NEW cov: 11817 ft: 14351 corp: 14/426b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CrossOver- 00:07:50.026 [2024-11-09 17:17:09.577822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.577850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.577977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.577994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.578120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.578139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.578259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.578275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.578401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242404 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.578422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.026 #26 NEW cov: 11817 ft: 14413 corp: 15/466b lim: 40 exec/s: 26 rss: 69Mb L: 40/40 MS: 1 ChangeBit- 00:07:50.026 [2024-11-09 17:17:09.627797] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.627824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.627949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.627967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.628087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.628108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.628188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:2424240a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.628204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.026 #27 NEW cov: 11817 ft: 14435 corp: 16/498b lim: 40 exec/s: 27 rss: 69Mb L: 32/40 MS: 1 EraseBytes- 00:07:50.026 [2024-11-09 17:17:09.667806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.667834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.667969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.667987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.668110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.668127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.668254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24240a24 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.668271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.668358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:24242480 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.668373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.026 #28 NEW cov: 11817 ft: 14481 corp: 17/538b lim: 40 exec/s: 28 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:50.026 [2024-11-09 
17:17:09.718052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.718080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.718217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.718234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.718361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.718378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.718495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.718511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.026 #29 NEW cov: 11817 ft: 14506 corp: 18/573b lim: 40 exec/s: 29 rss: 69Mb L: 35/40 MS: 1 EraseBytes- 00:07:50.026 [2024-11-09 17:17:09.768528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.768557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.768697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.768713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.768831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:2424247a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.768849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.768969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.768987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.026 [2024-11-09 17:17:09.769111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.026 [2024-11-09 17:17:09.769135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.026 #30 NEW cov: 11817 ft: 14529 corp: 19/613b lim: 40 exec/s: 30 rss: 69Mb L: 40/40 MS: 1 ChangeByte- 
00:07:50.286 [2024-11-09 17:17:09.818684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:09.818711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.286 [2024-11-09 17:17:09.818841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:09.818857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.286 [2024-11-09 17:17:09.818980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:09.818995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.286 [2024-11-09 17:17:09.819110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:09.819126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.286 [2024-11-09 17:17:09.819244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:24242480 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:09.819259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.286 #31 NEW cov: 11817 ft: 14540 corp: 20/653b lim: 40 exec/s: 31 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:50.286 [2024-11-09 17:17:09.867724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:09.867752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.286 #32 NEW cov: 11817 ft: 14575 corp: 21/664b lim: 40 exec/s: 32 rss: 69Mb L: 11/40 MS: 1 EraseBytes- 00:07:50.286 [2024-11-09 17:17:09.917917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:09.917949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.286 #33 NEW cov: 11817 ft: 14590 corp: 22/674b lim: 40 exec/s: 33 rss: 69Mb L: 10/40 MS: 1 EraseBytes- 00:07:50.286 [2024-11-09 17:17:09.968585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f20a6100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:09.968613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.286 [2024-11-09 17:17:09.968742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 
17:17:09.968760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.286 [2024-11-09 17:17:09.968886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:09.968902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.286 #36 NEW cov: 11817 ft: 14832 corp: 23/704b lim: 40 exec/s: 36 rss: 69Mb L: 30/40 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes- 00:07:50.286 [2024-11-09 17:17:10.019860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:10.019890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.286 [2024-11-09 17:17:10.020015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:10.020032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.286 [2024-11-09 17:17:10.020160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.286 [2024-11-09 17:17:10.020178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.286 #37 NEW cov: 11817 ft: 14946 corp: 24/730b lim: 40 exec/s: 37 rss: 69Mb L: 26/40 MS: 1 EraseBytes- 00:07:50.546 [2024-11-09 17:17:10.069557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.069586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.069719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.069737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.069864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.069881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.070006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.070024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.070157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:2424240a SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.070173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.546 #38 NEW cov: 11817 ft: 14962 corp: 25/770b lim: 40 exec/s: 38 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:50.546 [2024-11-09 17:17:10.129710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.129741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.129864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.129881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.130006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.130023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.130155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.130173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.130305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:2424240a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.130322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.546 #39 NEW cov: 11817 ft: 15023 corp: 26/810b lim: 40 exec/s: 39 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:50.546 [2024-11-09 17:17:10.178659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2c242424 cdw11:2424240a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.178688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.546 #40 NEW cov: 11817 ft: 15039 corp: 27/818b lim: 40 exec/s: 40 rss: 69Mb L: 8/40 MS: 1 EraseBytes- 00:07:50.546 [2024-11-09 17:17:10.239882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.239911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.240044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.240060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.240192] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.240208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.546 [2024-11-09 17:17:10.240348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.546 [2024-11-09 17:17:10.240365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.547 #41 NEW cov: 11817 ft: 15057 corp: 28/857b lim: 40 exec/s: 41 rss: 69Mb L: 39/40 MS: 1 EraseBytes- 00:07:50.547 [2024-11-09 17:17:10.289389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.547 [2024-11-09 17:17:10.289418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.547 [2024-11-09 17:17:10.289554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24240a24 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.547 [2024-11-09 17:17:10.289572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.806 #47 NEW cov: 11817 ft: 15261 corp: 29/875b lim: 40 exec/s: 47 rss: 69Mb L: 18/40 MS: 1 EraseBytes- 00:07:50.806 [2024-11-09 17:17:10.340432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.340461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.340588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.340605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.340734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.340752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.340873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24280000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.340888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.341010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0024240a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.341028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.806 #48 NEW 
cov: 11817 ft: 15277 corp: 30/915b lim: 40 exec/s: 48 rss: 69Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:50.806 [2024-11-09 17:17:10.390004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24241b24 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.390033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.390167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.390182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.390315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.390333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.806 #49 NEW cov: 11817 ft: 15283 corp: 31/941b lim: 40 exec/s: 49 rss: 69Mb L: 26/40 MS: 1 ChangeBinInt- 00:07:50.806 [2024-11-09 17:17:10.450505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.450543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.450678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.450696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.450837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.450855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.450985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.451005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.806 #50 NEW cov: 11817 ft: 15287 corp: 32/979b lim: 40 exec/s: 50 rss: 69Mb L: 38/40 MS: 1 CrossOver- 00:07:50.806 [2024-11-09 17:17:10.510957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.510986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.511111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.511127] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.511257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.511276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.511407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.511424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.806 [2024-11-09 17:17:10.511556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:24242424 cdw11:2424240a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.806 [2024-11-09 17:17:10.511574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.806 #51 NEW cov: 11817 ft: 15288 corp: 33/1019b lim: 40 exec/s: 51 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:50.807 [2024-11-09 17:17:10.560317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b5b2424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.807 [2024-11-09 17:17:10.560346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.807 [2024-11-09 17:17:10.560483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.807 [2024-11-09 17:17:10.560501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.807 [2024-11-09 17:17:10.560655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24242424 cdw11:24242424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.807 [2024-11-09 17:17:10.560673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.066 #54 NEW cov: 11817 ft: 15292 corp: 34/1043b lim: 40 exec/s: 27 rss: 69Mb L: 24/40 MS: 3 ChangeByte-CopyPart-CrossOver- 00:07:51.066 #54 DONE cov: 11817 ft: 15292 corp: 34/1043b lim: 40 exec/s: 27 rss: 69Mb 00:07:51.066 Done 54 runs in 2 second(s) 00:07:51.066 17:17:10 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_12.conf 00:07:51.066 17:17:10 -- ../common.sh@72 -- # (( i++ )) 00:07:51.066 17:17:10 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:51.066 17:17:10 -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:51.066 17:17:10 -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:51.066 17:17:10 -- nvmf/run.sh@24 -- # local timen=1 00:07:51.066 17:17:10 -- nvmf/run.sh@25 -- # local core=0x1 00:07:51.066 17:17:10 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:51.066 17:17:10 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:51.066 17:17:10 -- nvmf/run.sh@29 -- # printf %02d 13 00:07:51.066 17:17:10 -- nvmf/run.sh@29 -- # port=4413 00:07:51.066 17:17:10 -- 
nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:51.066 17:17:10 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:51.066 17:17:10 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:51.066 17:17:10 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 -r /var/tmp/spdk13.sock 00:07:51.066 [2024-11-09 17:17:10.744667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:51.066 [2024-11-09 17:17:10.744737] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid371614 ] 00:07:51.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.325 [2024-11-09 17:17:10.924714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.325 [2024-11-09 17:17:10.988295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:51.325 [2024-11-09 17:17:10.988417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.325 [2024-11-09 17:17:11.046200] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.325 [2024-11-09 17:17:11.062551] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:51.325 INFO: Running with entropic power schedule (0xFF, 100). 00:07:51.325 INFO: Seed: 3346204975 00:07:51.325 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:51.325 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:51.325 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:51.325 INFO: A corpus is not provided, starting from an empty corpus 00:07:51.325 #2 INITED exec/s: 0 rss: 61Mb 00:07:51.325 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:51.325 This may also happen if the target rejected all inputs we tried so far 00:07:51.584 [2024-11-09 17:17:11.110890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.584 [2024-11-09 17:17:11.110919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.584 [2024-11-09 17:17:11.110976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.584 [2024-11-09 17:17:11.110990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.844 NEW_FUNC[1/669]: 0x44cd38 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:51.844 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:51.844 #5 NEW cov: 11577 ft: 11576 corp: 2/20b lim: 40 exec/s: 0 rss: 68Mb L: 19/19 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:51.844 [2024-11-09 17:17:11.401491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.401531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.844 NEW_FUNC[1/1]: 0xe94458 in spdk_process_is_primary /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:290 00:07:51.844 #6 NEW cov: 11691 ft: 12321 corp: 3/29b lim: 40 exec/s: 0 rss: 68Mb L: 9/19 MS: 1 InsertRepeatedBytes- 00:07:51.844 [2024-11-09 17:17:11.441628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797983 cdw11:86868679 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.441654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.844 [2024-11-09 17:17:11.441729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.441743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.844 #7 NEW cov: 11697 ft: 12610 corp: 4/48b lim: 40 exec/s: 0 rss: 68Mb L: 19/19 MS: 1 ChangeBinInt- 00:07:51.844 [2024-11-09 17:17:11.481637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.481664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.844 #8 NEW cov: 11782 ft: 12814 corp: 5/61b lim: 40 exec/s: 0 rss: 69Mb L: 13/19 MS: 1 CrossOver- 00:07:51.844 [2024-11-09 17:17:11.521884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797979 cdw11:795b7979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.521911] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.844 [2024-11-09 17:17:11.521966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.521980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.844 #14 NEW cov: 11782 ft: 12979 corp: 6/80b lim: 40 exec/s: 0 rss: 69Mb L: 19/19 MS: 1 ChangeByte- 00:07:51.844 [2024-11-09 17:17:11.561842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:eb000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.561868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.844 #16 NEW cov: 11782 ft: 13069 corp: 7/91b lim: 40 exec/s: 0 rss: 69Mb L: 11/19 MS: 2 InsertByte-CrossOver- 00:07:51.844 [2024-11-09 17:17:11.602181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797931 cdw11:75000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.602207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.844 [2024-11-09 17:17:11.602261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000079 cdw11:795b7979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.602275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.844 [2024-11-09 17:17:11.602333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.844 [2024-11-09 17:17:11.602346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.104 #17 NEW cov: 11782 ft: 13374 corp: 8/118b lim: 40 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 CMP- DE: "1u\000\000\000\000\000\000"- 00:07:52.104 [2024-11-09 17:17:11.642065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.104 [2024-11-09 17:17:11.642090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.104 #18 NEW cov: 11782 ft: 13424 corp: 9/131b lim: 40 exec/s: 0 rss: 69Mb L: 13/27 MS: 1 ChangeBinInt- 00:07:52.104 [2024-11-09 17:17:11.682201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00020004 cdw11:002b000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.104 [2024-11-09 17:17:11.682227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.104 #22 NEW cov: 11782 ft: 13453 corp: 10/139b lim: 40 exec/s: 0 rss: 69Mb L: 8/27 MS: 4 EraseBytes-ChangeBit-ChangeBit-InsertByte- 00:07:52.104 [2024-11-09 17:17:11.722275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:000000f9 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.104 [2024-11-09 17:17:11.722300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.104 #28 NEW cov: 11782 ft: 13515 corp: 11/152b lim: 40 exec/s: 0 rss: 69Mb L: 13/27 MS: 1 ChangeBinInt- 00:07:52.104 [2024-11-09 17:17:11.762635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797931 cdw11:75000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.104 [2024-11-09 17:17:11.762662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.104 [2024-11-09 17:17:11.762717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000079 cdw11:795b0a79 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.104 [2024-11-09 17:17:11.762731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.104 [2024-11-09 17:17:11.762786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.104 [2024-11-09 17:17:11.762800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.104 #29 NEW cov: 11782 ft: 13571 corp: 12/180b lim: 40 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 CrossOver- 00:07:52.104 [2024-11-09 17:17:11.802509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.104 [2024-11-09 17:17:11.802544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.104 #30 NEW cov: 11782 ft: 13627 corp: 13/193b lim: 40 exec/s: 0 rss: 69Mb L: 13/28 MS: 1 CMP- DE: "\000\000\000\000\000\000\004\000"- 00:07:52.104 [2024-11-09 17:17:11.832691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.104 [2024-11-09 17:17:11.832716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.104 [2024-11-09 17:17:11.832770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.104 [2024-11-09 17:17:11.832786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.104 #33 NEW cov: 11782 ft: 13731 corp: 14/211b lim: 40 exec/s: 0 rss: 69Mb L: 18/28 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:52.363 [2024-11-09 17:17:11.873071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797931 cdw11:75000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.363 [2024-11-09 17:17:11.873098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:11.873154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000079 cdw11:795b0a79 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:11.873168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:11.873223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:79797979 cdw11:79797900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:11.873237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:11.873290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000079 cdw11:79797908 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:11.873302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.364 #34 NEW cov: 11782 ft: 14210 corp: 15/243b lim: 40 exec/s: 0 rss: 69Mb L: 32/32 MS: 1 CrossOver- 00:07:52.364 [2024-11-09 17:17:11.912933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:11.912959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:11.913032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:79797979 cdw11:795c7979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:11.913045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.364 #40 NEW cov: 11782 ft: 14217 corp: 16/262b lim: 40 exec/s: 0 rss: 69Mb L: 19/32 MS: 1 ChangeByte- 00:07:52.364 [2024-11-09 17:17:11.953166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797931 cdw11:75000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:11.953191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:11.953247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000079 cdw11:795b0a79 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:11.953261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:11.953315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:79797900 cdw11:04000079 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:11.953344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.364 #41 NEW cov: 11782 ft: 14243 corp: 17/290b lim: 40 exec/s: 0 rss: 69Mb L: 28/32 MS: 1 CrossOver- 00:07:52.364 [2024-11-09 17:17:11.993042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000f900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:11.993067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.364 NEW_FUNC[1/1]: 
0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:52.364 #42 NEW cov: 11805 ft: 14303 corp: 18/303b lim: 40 exec/s: 0 rss: 69Mb L: 13/32 MS: 1 ShuffleBytes- 00:07:52.364 [2024-11-09 17:17:12.033282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:12.033307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:12.033363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00020004 cdw11:002b000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:12.033376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.364 #43 NEW cov: 11805 ft: 14364 corp: 19/319b lim: 40 exec/s: 0 rss: 69Mb L: 16/32 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:52.364 [2024-11-09 17:17:12.073682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797931 cdw11:75000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:12.073707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:12.073781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000079 cdw11:795b0a79 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:12.073794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:12.073849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:79797979 cdw11:5b797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:12.073862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:12.073914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:79790000 cdw11:00007979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:12.073927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.364 #44 NEW cov: 11805 ft: 14384 corp: 20/354b lim: 40 exec/s: 44 rss: 69Mb L: 35/35 MS: 1 CrossOver- 00:07:52.364 [2024-11-09 17:17:12.113529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:01005d00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:12.113553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.364 [2024-11-09 17:17:12.113624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:04002b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.364 [2024-11-09 17:17:12.113638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.623 #45 NEW cov: 11805 ft: 14405 corp: 21/371b lim: 40 
exec/s: 45 rss: 70Mb L: 17/35 MS: 1 InsertByte- 00:07:52.623 [2024-11-09 17:17:12.153561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.153586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.624 #46 NEW cov: 11805 ft: 14419 corp: 22/384b lim: 40 exec/s: 46 rss: 70Mb L: 13/35 MS: 1 ShuffleBytes- 00:07:52.624 [2024-11-09 17:17:12.193790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.193818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.624 [2024-11-09 17:17:12.193874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.193887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.624 #47 NEW cov: 11805 ft: 14460 corp: 23/401b lim: 40 exec/s: 47 rss: 70Mb L: 17/35 MS: 1 EraseBytes- 00:07:52.624 [2024-11-09 17:17:12.233883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:83bfddd0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.233908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.624 [2024-11-09 17:17:12.233965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:986c9a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.233978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.624 #48 NEW cov: 11805 ft: 14515 corp: 24/422b lim: 40 exec/s: 48 rss: 70Mb L: 21/35 MS: 1 CMP- DE: "\377\203\277\335\320\230l\232"- 00:07:52.624 [2024-11-09 17:17:12.273884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:0000f900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.273909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.624 #49 NEW cov: 11805 ft: 14536 corp: 25/435b lim: 40 exec/s: 49 rss: 70Mb L: 13/35 MS: 1 CMP- DE: "\016\000\000\000"- 00:07:52.624 [2024-11-09 17:17:12.314364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.314389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.624 [2024-11-09 17:17:12.314459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.314473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:52.624 [2024-11-09 17:17:12.314533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.314547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.624 [2024-11-09 17:17:12.314611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.314624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.624 #50 NEW cov: 11805 ft: 14553 corp: 26/467b lim: 40 exec/s: 50 rss: 70Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:07:52.624 [2024-11-09 17:17:12.354426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.354451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.624 [2024-11-09 17:17:12.354527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.354541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.624 [2024-11-09 17:17:12.354599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.354612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.624 [2024-11-09 17:17:12.354668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:81ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.624 [2024-11-09 17:17:12.354681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.624 #51 NEW cov: 11805 ft: 14628 corp: 27/500b lim: 40 exec/s: 51 rss: 70Mb L: 33/35 MS: 1 InsertByte- 00:07:52.884 [2024-11-09 17:17:12.394377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:83bfddd0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.394403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.884 [2024-11-09 17:17:12.394458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:986c9a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.394471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.884 #52 NEW cov: 11805 ft: 14643 corp: 28/519b lim: 40 exec/s: 52 rss: 70Mb L: 19/35 MS: 1 EraseBytes- 00:07:52.884 [2024-11-09 17:17:12.434468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:a8a8a800 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.434492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.884 [2024-11-09 17:17:12.434552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.434565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.884 #53 NEW cov: 11805 ft: 14646 corp: 29/535b lim: 40 exec/s: 53 rss: 70Mb L: 16/35 MS: 1 InsertRepeatedBytes- 00:07:52.884 [2024-11-09 17:17:12.474600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797983 cdw11:86868679 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.474625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.884 [2024-11-09 17:17:12.474682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.474695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.884 #54 NEW cov: 11805 ft: 14673 corp: 30/557b lim: 40 exec/s: 54 rss: 70Mb L: 22/35 MS: 1 CrossOver- 00:07:52.884 [2024-11-09 17:17:12.514709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:33333333 cdw11:33333333 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.514734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.884 [2024-11-09 17:17:12.514806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:33333333 cdw11:33333333 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.514820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.884 #58 NEW cov: 11805 ft: 14690 corp: 31/575b lim: 40 exec/s: 58 rss: 70Mb L: 18/35 MS: 4 ShuffleBytes-CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:52.884 [2024-11-09 17:17:12.544648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00022d00 cdw11:04002b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.544672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.884 #59 NEW cov: 11805 ft: 14697 corp: 32/584b lim: 40 exec/s: 59 rss: 70Mb L: 9/35 MS: 1 InsertByte- 00:07:52.884 [2024-11-09 17:17:12.584798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:20000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.584823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.884 #60 NEW cov: 11805 ft: 14711 corp: 33/597b lim: 40 exec/s: 60 rss: 70Mb L: 13/35 MS: 1 ChangeBit- 00:07:52.884 [2024-11-09 17:17:12.625015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00022d00 cdw11:02000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.625040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.884 [2024-11-09 17:17:12.625094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:04002b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.884 [2024-11-09 17:17:12.625108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.884 #61 NEW cov: 11805 ft: 14736 corp: 34/614b lim: 40 exec/s: 61 rss: 70Mb L: 17/35 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:53.144 [2024-11-09 17:17:12.665370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797931 cdw11:75000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.665395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.144 [2024-11-09 17:17:12.665452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000079 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.665465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.144 [2024-11-09 17:17:12.665519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:795b7979 cdw11:0079795b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.665536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.144 [2024-11-09 17:17:12.665590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:79797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.665603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.144 #62 NEW cov: 11805 ft: 14739 corp: 35/651b lim: 40 exec/s: 62 rss: 70Mb L: 37/37 MS: 1 CopyPart- 00:07:53.144 [2024-11-09 17:17:12.705118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797979 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.705143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.144 #63 NEW cov: 11805 ft: 14756 corp: 36/660b lim: 40 exec/s: 63 rss: 70Mb L: 9/37 MS: 1 CrossOver- 00:07:53.144 [2024-11-09 17:17:12.745273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797979 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.745296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.144 #64 NEW cov: 11805 ft: 14765 corp: 37/669b lim: 40 exec/s: 64 rss: 70Mb L: 9/37 MS: 1 CrossOver- 00:07:53.144 [2024-11-09 17:17:12.785615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.785640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.144 [2024-11-09 17:17:12.785713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:0c0c0c0c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.785727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.144 [2024-11-09 17:17:12.785780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0c0c0c0c cdw11:0c0c0c0c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.785793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.144 #65 NEW cov: 11805 ft: 14771 corp: 38/700b lim: 40 exec/s: 65 rss: 70Mb L: 31/37 MS: 1 InsertRepeatedBytes- 00:07:53.144 [2024-11-09 17:17:12.825623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.825647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.144 [2024-11-09 17:17:12.825718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.825732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.144 #66 NEW cov: 11805 ft: 14785 corp: 39/718b lim: 40 exec/s: 66 rss: 70Mb L: 18/37 MS: 1 CopyPart- 00:07:53.144 [2024-11-09 17:17:12.865603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00f90004 cdw11:002b000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.865628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.144 #67 NEW cov: 11805 ft: 14795 corp: 40/726b lim: 40 exec/s: 67 rss: 70Mb L: 8/37 MS: 1 ChangeBinInt- 00:07:53.144 [2024-11-09 17:17:12.905840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.905865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.144 [2024-11-09 17:17:12.905920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.144 [2024-11-09 17:17:12.905934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.404 #68 NEW cov: 11805 ft: 14801 corp: 41/743b lim: 40 exec/s: 68 rss: 70Mb L: 17/37 MS: 1 PersAutoDict- DE: "\016\000\000\000"- 00:07:53.404 [2024-11-09 17:17:12.935812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:86797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.404 
[2024-11-09 17:17:12.935837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.404 #69 NEW cov: 11805 ft: 14857 corp: 42/756b lim: 40 exec/s: 69 rss: 70Mb L: 13/37 MS: 1 EraseBytes- 00:07:53.404 [2024-11-09 17:17:12.976275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797931 cdw11:75000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.404 [2024-11-09 17:17:12.976301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.404 [2024-11-09 17:17:12.976360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000079 cdw11:795b0a79 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.404 [2024-11-09 17:17:12.976373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.404 [2024-11-09 17:17:12.976426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:79797979 cdw11:79797900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.404 [2024-11-09 17:17:12.976439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.404 [2024-11-09 17:17:12.976492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000179 cdw11:79797908 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.404 [2024-11-09 17:17:12.976505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.404 #70 NEW cov: 11805 ft: 14868 corp: 43/788b lim: 40 exec/s: 70 rss: 70Mb L: 32/37 MS: 1 ChangeBit- 00:07:53.404 [2024-11-09 17:17:13.016057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.404 [2024-11-09 17:17:13.016082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.404 #71 NEW cov: 11805 ft: 14876 corp: 44/797b lim: 40 exec/s: 71 rss: 70Mb L: 9/37 MS: 1 ChangeBinInt- 00:07:53.404 [2024-11-09 17:17:13.056165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:79797979 cdw11:00000079 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.404 [2024-11-09 17:17:13.056192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.404 #72 NEW cov: 11805 ft: 14896 corp: 45/805b lim: 40 exec/s: 72 rss: 70Mb L: 8/37 MS: 1 EraseBytes- 00:07:53.404 [2024-11-09 17:17:13.096442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000000cd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.404 [2024-11-09 17:17:13.096468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.404 [2024-11-09 17:17:13.096528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cdcdcd00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.404 [2024-11-09 17:17:13.096541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.404 #73 NEW cov: 11805 ft: 14903 corp: 46/827b lim: 40 exec/s: 36 rss: 70Mb L: 22/37 MS: 1 InsertRepeatedBytes- 00:07:53.404 #73 DONE cov: 11805 ft: 14903 corp: 46/827b lim: 40 exec/s: 36 rss: 70Mb 00:07:53.404 ###### Recommended dictionary. ###### 00:07:53.404 "1u\000\000\000\000\000\000" # Uses: 0 00:07:53.404 "\000\000\000\000\000\000\004\000" # Uses: 0 00:07:53.404 "\001\000\000\000\000\000\000\000" # Uses: 0 00:07:53.404 "\377\203\277\335\320\230l\232" # Uses: 0 00:07:53.404 "\016\000\000\000" # Uses: 1 00:07:53.404 "\002\000\000\000\000\000\000\000" # Uses: 0 00:07:53.404 ###### End of recommended dictionary. ###### 00:07:53.404 Done 73 runs in 2 second(s) 00:07:53.664 17:17:13 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_13.conf 00:07:53.664 17:17:13 -- ../common.sh@72 -- # (( i++ )) 00:07:53.664 17:17:13 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.664 17:17:13 -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:53.664 17:17:13 -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:53.664 17:17:13 -- nvmf/run.sh@24 -- # local timen=1 00:07:53.664 17:17:13 -- nvmf/run.sh@25 -- # local core=0x1 00:07:53.664 17:17:13 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:53.664 17:17:13 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:53.664 17:17:13 -- nvmf/run.sh@29 -- # printf %02d 14 00:07:53.664 17:17:13 -- nvmf/run.sh@29 -- # port=4414 00:07:53.664 17:17:13 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:53.664 17:17:13 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:53.664 17:17:13 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:53.664 17:17:13 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 -r /var/tmp/spdk14.sock 00:07:53.664 [2024-11-09 17:17:13.289945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:53.664 [2024-11-09 17:17:13.290014] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372024 ] 00:07:53.664 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.924 [2024-11-09 17:17:13.465096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.924 [2024-11-09 17:17:13.529160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:53.924 [2024-11-09 17:17:13.529301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.924 [2024-11-09 17:17:13.587183] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.924 [2024-11-09 17:17:13.603550] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:53.924 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.924 INFO: Seed: 1594262907 00:07:53.924 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:53.924 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:53.924 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:53.924 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.924 #2 INITED exec/s: 0 rss: 61Mb 00:07:53.924 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:53.924 This may also happen if the target rejected all inputs we tried so far 00:07:53.924 [2024-11-09 17:17:13.652567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.924 [2024-11-09 17:17:13.652599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.924 [2024-11-09 17:17:13.652670] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.924 [2024-11-09 17:17:13.652687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.924 [2024-11-09 17:17:13.652742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.924 [2024-11-09 17:17:13.652758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.924 [2024-11-09 17:17:13.652814] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.924 [2024-11-09 17:17:13.652828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.924 [2024-11-09 17:17:13.652883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.924 [2024-11-09 17:17:13.652899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.184 NEW_FUNC[1/672]: 0x44e908 in fuzz_admin_set_features_command 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:54.184 NEW_FUNC[2/672]: 0x46fd38 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:54.184 #6 NEW cov: 11580 ft: 11580 corp: 2/36b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 4 ChangeByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:07:54.443 [2024-11-09 17:17:13.953268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.443 [2024-11-09 17:17:13.953304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.443 [2024-11-09 17:17:13.953365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.443 [2024-11-09 17:17:13.953383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.443 [2024-11-09 17:17:13.953443] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.443 [2024-11-09 17:17:13.953459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.443 [2024-11-09 17:17:13.953519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.443 [2024-11-09 17:17:13.953536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.443 [2024-11-09 17:17:13.953557] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.443 [2024-11-09 17:17:13.953573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.443 #17 NEW cov: 11695 ft: 12017 corp: 3/71b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:54.443 [2024-11-09 17:17:14.003278] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.443 [2024-11-09 17:17:14.003306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.443 [2024-11-09 17:17:14.003362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.003377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.003433] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.003448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.003502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:54.444 [2024-11-09 17:17:14.003517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.003577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.003593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.444 #18 NEW cov: 11701 ft: 12189 corp: 4/106b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:54.444 [2024-11-09 17:17:14.043364] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.043391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.043452] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.043468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.043527] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.043542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.043595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.043609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.043667] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.043682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.444 #19 NEW cov: 11786 ft: 12440 corp: 5/141b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ChangeByte- 00:07:54.444 [2024-11-09 17:17:14.083502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.083538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.083596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.083611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.083667] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.083681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT 
SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.083736] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.083751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.083807] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.083822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.444 #20 NEW cov: 11786 ft: 12540 corp: 6/176b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:54.444 [2024-11-09 17:17:14.123596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.123624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.123697] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.123712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.123767] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.123782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.123841] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.123857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.123913] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.123928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.444 #21 NEW cov: 11786 ft: 12722 corp: 7/211b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CopyPart- 00:07:54.444 [2024-11-09 17:17:14.163696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.163722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.163793] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.163808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.163865] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.163879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.163935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.163949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.164006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.164021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.444 #22 NEW cov: 11786 ft: 12785 corp: 8/246b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CMP- DE: "\201\031\207\360\336\277\204\000"- 00:07:54.444 [2024-11-09 17:17:14.203843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.203887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.203945] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.203961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.204019] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.204033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.204090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.204104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.444 [2024-11-09 17:17:14.204160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.444 [2024-11-09 17:17:14.204177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.704 #23 NEW cov: 11786 ft: 12836 corp: 9/281b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CMP- DE: "\000\000"- 00:07:54.704 [2024-11-09 17:17:14.243979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.244006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.244076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.244091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.244147] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.244162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.244217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.244232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.244287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.244302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.704 #24 NEW cov: 11786 ft: 12849 corp: 10/316b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 PersAutoDict- DE: "\201\031\207\360\336\277\204\000"- 00:07:54.704 [2024-11-09 17:17:14.283913] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.283938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.284011] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.284026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.284083] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.284097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.284154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.284168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.704 #25 NEW cov: 11786 ft: 13014 corp: 11/350b lim: 35 exec/s: 0 rss: 68Mb L: 34/35 MS: 1 EraseBytes- 00:07:54.704 [2024-11-09 17:17:14.324169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.324196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.324266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 
17:17:14.324281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.324343] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.324358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.324415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.324430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.324486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.324501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.704 #26 NEW cov: 11786 ft: 13093 corp: 12/385b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ChangeByte- 00:07:54.704 [2024-11-09 17:17:14.364281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.364308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.364380] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.364395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.364449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.364465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.364519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.364539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.364595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.364609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.704 #27 NEW cov: 11786 ft: 13137 corp: 13/420b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CMP- DE: "\377\203\277\337\015I\261\204"- 00:07:54.704 [2024-11-09 17:17:14.404160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.404187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE 
(01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.404259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.404275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.704 NEW_FUNC[1/1]: 0x1133ce8 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1651 00:07:54.704 #28 NEW cov: 11809 ft: 13494 corp: 14/442b lim: 35 exec/s: 0 rss: 68Mb L: 22/35 MS: 1 CrossOver- 00:07:54.704 [2024-11-09 17:17:14.454486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.454514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.454579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.454593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.704 [2024-11-09 17:17:14.454665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.704 [2024-11-09 17:17:14.454681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.964 #29 NEW cov: 11816 ft: 13639 corp: 15/472b lim: 35 exec/s: 0 rss: 68Mb L: 30/35 MS: 1 PersAutoDict- DE: "\201\031\207\360\336\277\204\000"- 00:07:54.964 [2024-11-09 17:17:14.494681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.494710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.964 [2024-11-09 17:17:14.494778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.494794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.964 [2024-11-09 17:17:14.494850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.494865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.964 [2024-11-09 17:17:14.494921] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.494935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.964 [2024-11-09 17:17:14.494990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.495005] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.964 #30 NEW cov: 11816 ft: 13659 corp: 16/507b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:54.964 [2024-11-09 17:17:14.534676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.534703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.964 [2024-11-09 17:17:14.534767] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.534783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.964 [2024-11-09 17:17:14.534839] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.534854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.964 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:54.964 #31 NEW cov: 11839 ft: 13718 corp: 17/540b lim: 35 exec/s: 0 rss: 68Mb L: 33/35 MS: 1 CrossOver- 00:07:54.964 [2024-11-09 17:17:14.574880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.574909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.964 [2024-11-09 17:17:14.574983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.574999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.964 [2024-11-09 17:17:14.575055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.575070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.964 [2024-11-09 17:17:14.575125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.964 [2024-11-09 17:17:14.575139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.965 [2024-11-09 17:17:14.575194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.575208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.965 #32 NEW cov: 11839 ft: 13748 corp: 18/575b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CopyPart- 00:07:54.965 [2024-11-09 17:17:14.614919] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.614946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.965 [2024-11-09 17:17:14.615006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.615021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.965 [2024-11-09 17:17:14.615062] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.615075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.965 [2024-11-09 17:17:14.615129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.615144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.965 #33 NEW cov: 11839 ft: 13782 corp: 19/609b lim: 35 exec/s: 33 rss: 69Mb L: 34/35 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:54.965 [2024-11-09 17:17:14.655227] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.655256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.965 [2024-11-09 17:17:14.655328] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.655344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.965 [2024-11-09 17:17:14.655399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.655413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.965 [2024-11-09 17:17:14.655466] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.655484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.965 #34 NEW cov: 11839 ft: 13872 corp: 20/644b lim: 35 exec/s: 34 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:54.965 [2024-11-09 17:17:14.694995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.695021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.965 [2024-11-09 17:17:14.695092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.965 [2024-11-09 17:17:14.695108] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.965 #35 NEW cov: 11839 ft: 13896 corp: 21/666b lim: 35 exec/s: 35 rss: 69Mb L: 22/35 MS: 1 ChangeBit- 00:07:55.224 [2024-11-09 17:17:14.735332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.735360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.735432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000087 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.735446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.735502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.735518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.225 #36 NEW cov: 11839 ft: 13933 corp: 22/700b lim: 35 exec/s: 36 rss: 69Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:55.225 [2024-11-09 17:17:14.775377] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.775404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.775477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.775493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.775555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.775570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.775623] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.775639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.225 #37 NEW cov: 11839 ft: 13944 corp: 23/734b lim: 35 exec/s: 37 rss: 69Mb L: 34/35 MS: 1 ShuffleBytes- 00:07:55.225 [2024-11-09 17:17:14.815341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.815367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.815437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.815452] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.225 #38 NEW cov: 11839 ft: 13956 corp: 24/756b lim: 35 exec/s: 38 rss: 69Mb L: 22/35 MS: 1 ChangeBinInt- 00:07:55.225 [2024-11-09 17:17:14.855754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.855783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.855854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.855870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.855924] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.855938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.855994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.856010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.856065] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.856080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.225 #39 NEW cov: 11839 ft: 13986 corp: 25/791b lim: 35 exec/s: 39 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:55.225 [2024-11-09 17:17:14.895854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.895882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.895954] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.895970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.896027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.896042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.896098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.896113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
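The completion patterns repeated through this run follow directly from how the fuzzer mutates CDW10 of the Set Features command: per the NVMe spec, bits [7:0] of CDW10 carry the Feature Identifier (FID) and bit 31 is the Save (SV) flag. So cdw10:8000000a is FID 0x0a (Write Atomicity) with SV set, which is why those commands print as SET FEATURES WRITE ATOMICITY; cdw10:800000c6 and cdw10:80000027 ask to persist reserved FIDs, which the target answers with FEATURE ID NOT SAVEABLE (01/0d); and cdw10:00000084 hits a reserved FID with SV clear, drawing INVALID FIELD (00/02) instead. A minimal standalone C sketch of that decode, using CDW10 values copied verbatim from the log — the macro names here are illustrative, not SPDK's own definitions:

#include <inttypes.h>
#include <stdio.h>

/* Illustrative decode of the Set Features CDW10 values printed above.
 * Layout per the NVMe spec: bits [7:0] = Feature Identifier (FID),
 * bit 31 = Save (SV). These macro names are ours, not SPDK's. */
#define SET_FEATURES_FID(cdw10)  ((cdw10) & 0xffu)
#define SET_FEATURES_SAVE(cdw10) (((cdw10) >> 31) & 0x1u)

int main(void)
{
    /* CDW10 values taken verbatim from the command prints in this log. */
    const uint32_t samples[] = { 0x8000000a, 0x800000c6, 0x80000027, 0x00000084 };

    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        printf("cdw10:%08" PRIx32 " -> FID 0x%02" PRIx32 ", SV=%" PRIu32 "\n",
               samples[i], SET_FEATURES_FID(samples[i]),
               SET_FEATURES_SAVE(samples[i]));
    }
    return 0;
}

Running this prints FID 0x0a/SV=1, 0xc6/SV=1, 0x27/SV=1, and 0x84/SV=0 — the two SV cases matching the FEATURE ID NOT SAVEABLE versus INVALID FIELD completions seen above.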
00:07:55.225 [2024-11-09 17:17:14.896169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.896184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.225 #40 NEW cov: 11839 ft: 13995 corp: 26/826b lim: 35 exec/s: 40 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:07:55.225 [2024-11-09 17:17:14.925823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.925850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.925920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000087 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.925938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.925991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ERROR_RECOVERY cid:7 cdw10:80000005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.926005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.225 NEW_FUNC[1/1]: 0x46c698 in feat_error_recover /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:304 00:07:55.225 #41 NEW cov: 11862 ft: 14019 corp: 27/860b lim: 35 exec/s: 41 rss: 69Mb L: 34/35 MS: 1 CMP- DE: "\001\005"- 00:07:55.225 [2024-11-09 17:17:14.965511] ctrlr.c:1668:nvmf_ctrlr_set_features_host_identifier: *ERROR*: Set Features - Host Identifier not allowed 00:07:55.225 [2024-11-09 17:17:14.966140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST IDENTIFIER cid:5 cdw10:00000081 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.966169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMMAND SEQUENCE ERROR (00/0c) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.966225] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.966239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.966296] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.966311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.225 [2024-11-09 17:17:14.966368] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.225 [2024-11-09 17:17:14.966384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.485 NEW_FUNC[1/2]: 0x4746c8 in feat_host_identifier 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:372 00:07:55.485 NEW_FUNC[2/2]: 0x1134b08 in nvmf_ctrlr_set_features_host_identifier /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1665 00:07:55.485 #42 NEW cov: 11881 ft: 14085 corp: 28/895b lim: 35 exec/s: 42 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:55.485 [2024-11-09 17:17:15.016137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.016165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.016233] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.016247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.016303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.016316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.016370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.016385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.016442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.016460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.046120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.046148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.046203] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.046219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.046274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.046288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.046343] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.046358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:55.485 #44 NEW cov: 11881 ft: 14160 corp: 29/925b lim: 35 exec/s: 44 rss: 69Mb L: 30/35 MS: 2 ChangeBinInt-EraseBytes- 00:07:55.485 [2024-11-09 17:17:15.076283] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.076310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.076381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000087 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.076398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.076456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.076471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.485 #45 NEW cov: 11881 ft: 14193 corp: 30/959b lim: 35 exec/s: 45 rss: 69Mb L: 34/35 MS: 1 ShuffleBytes- 00:07:55.485 [2024-11-09 17:17:15.106519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.106555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.106627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000087 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.106643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.106699] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.106713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.106770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.106784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.485 #46 NEW cov: 11881 ft: 14287 corp: 31/994b lim: 35 exec/s: 46 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:55.485 [2024-11-09 17:17:15.146612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.146646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.146701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.146718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.146777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.146792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.146850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.485 [2024-11-09 17:17:15.146863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.485 [2024-11-09 17:17:15.146920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.146933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.486 #47 NEW cov: 11881 ft: 14291 corp: 32/1029b lim: 35 exec/s: 47 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:07:55.486 [2024-11-09 17:17:15.186672] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.186698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.486 [2024-11-09 17:17:15.186771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.186788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.486 [2024-11-09 17:17:15.186834] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.186849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.486 [2024-11-09 17:17:15.186906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.186922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.486 [2024-11-09 17:17:15.186979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.186995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.486 #48 NEW cov: 11881 ft: 14306 corp: 33/1064b lim: 35 exec/s: 48 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:55.486 [2024-11-09 17:17:15.226829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.226856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.486 [2024-11-09 17:17:15.226926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.226945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.486 [2024-11-09 17:17:15.227002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.227016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.486 [2024-11-09 17:17:15.227069] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.486 [2024-11-09 17:17:15.227084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.486 #49 NEW cov: 11881 ft: 14311 corp: 34/1099b lim: 35 exec/s: 49 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:55.746 [2024-11-09 17:17:15.266627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.266653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.266724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.266740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.746 #50 NEW cov: 11881 ft: 14395 corp: 35/1121b lim: 35 exec/s: 50 rss: 70Mb L: 22/35 MS: 1 ChangeByte- 00:07:55.746 [2024-11-09 17:17:15.306733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.306760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.306831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.306846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.306902] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.306917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.746 #51 NEW cov: 11881 ft: 14401 corp: 36/1143b lim: 35 exec/s: 51 rss: 70Mb L: 22/35 MS: 1 ShuffleBytes- 00:07:55.746 [2024-11-09 17:17:15.347171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.347197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.347254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.347269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.347323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.347337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.347393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.347408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.347465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.347480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.746 #52 NEW cov: 11881 ft: 14412 corp: 37/1178b lim: 35 exec/s: 52 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:55.746 [2024-11-09 17:17:15.387307] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.387333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.387406] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.387421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.387479] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.387493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.387554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.387570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.387626] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.387641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.746 #53 NEW cov: 11881 ft: 14481 corp: 38/1213b lim: 35 exec/s: 53 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:07:55.746 [2024-11-09 17:17:15.427217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 
17:17:15.427244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.427316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.427331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.427387] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.427402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.427456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.427471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.746 #54 NEW cov: 11881 ft: 14495 corp: 39/1245b lim: 35 exec/s: 54 rss: 70Mb L: 32/35 MS: 1 EraseBytes- 00:07:55.746 [2024-11-09 17:17:15.467542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.467568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.467642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.467661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.467715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.467728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.467786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.467801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.746 #55 NEW cov: 11881 ft: 14512 corp: 40/1280b lim: 35 exec/s: 55 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:55.746 [2024-11-09 17:17:15.507605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.507631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.507703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.507719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.507776] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.507791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.507846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.507862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.746 [2024-11-09 17:17:15.507915] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.746 [2024-11-09 17:17:15.507930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.006 #56 NEW cov: 11881 ft: 14527 corp: 41/1315b lim: 35 exec/s: 56 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:56.006 [2024-11-09 17:17:15.547806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.547832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.547905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.547921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.547978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.547993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.548046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.548061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.548116] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.548150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.006 #57 NEW cov: 11881 ft: 14529 corp: 42/1350b lim: 35 exec/s: 57 rss: 70Mb L: 35/35 MS: 1 ChangeBit- 00:07:56.006 [2024-11-09 17:17:15.587913] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.587940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.588010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES RESERVED cid:6 cdw10:800000f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.588025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.588083] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.588098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.588153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.588168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.006 #58 NEW cov: 11881 ft: 14556 corp: 43/1385b lim: 35 exec/s: 58 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\005"- 00:07:56.006 [2024-11-09 17:17:15.628054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.628081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.628139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.628154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.628210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.628226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.628282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.628298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.006 [2024-11-09 17:17:15.628354] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.006 [2024-11-09 17:17:15.628369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.006 #59 NEW cov: 11881 ft: 14557 corp: 44/1420b lim: 35 exec/s: 29 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:07:56.006 #59 DONE cov: 11881 ft: 14557 corp: 44/1420b lim: 35 exec/s: 29 rss: 70Mb 00:07:56.006 ###### Recommended dictionary. ###### 00:07:56.006 "\201\031\207\360\336\277\204\000" # Uses: 2 00:07:56.006 "\000\000" # Uses: 2 00:07:56.006 "\377\203\277\337\015I\261\204" # Uses: 0 00:07:56.006 "\001\005" # Uses: 1 00:07:56.006 ###### End of recommended dictionary. 
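The "#NN NEW ..." records above are standard libFuzzer status lines: cov counts covered control-flow edges, ft counts features (finer-grained coverage signals), corp gives the corpus entry count and total bytes, lim is the current input-length cap, exec/s the execution rate, rss resident memory, L: the new input's length against the largest in the corpus, and MS: the mutation sequence that produced it. As a rough illustration only (the regex and helper below are a sketch for scraping a saved console log, not part of SPDK or libFuzzer tooling), the coverage trend can be pulled out like so:

import re

# Matches status lines such as:
#   #54 NEW cov: 11881 ft: 14495 corp: 39/1245b lim: 35 exec/s: 54 rss: 70Mb L: 32/35 MS: 1 EraseBytes-
STATUS = re.compile(
    r"#(?P<execs>\d+) (?P<event>NEW|REDUCE|DONE) "
    r"cov: (?P<cov>\d+) ft: (?P<ft>\d+) "
    r"corp: (?P<entries>\d+)/(?P<size>\d+)b lim: (?P<lim>\d+) "
    r"exec/s: (?P<rate>\d+) rss: (?P<rss>\d+)Mb"
)

def coverage_trend(log_text):
    """Yield (inputs_executed, edges_covered) for each matching status line."""
    for m in STATUS.finditer(log_text):
        yield int(m.group("execs")), int(m.group("cov"))
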
###### 00:07:56.006 Done 59 runs in 2 second(s) 00:07:56.006 17:17:15 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_14.conf 00:07:56.266 17:17:15 -- ../common.sh@72 -- # (( i++ )) 00:07:56.266 17:17:15 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.266 17:17:15 -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:56.266 17:17:15 -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:56.266 17:17:15 -- nvmf/run.sh@24 -- # local timen=1 00:07:56.266 17:17:15 -- nvmf/run.sh@25 -- # local core=0x1 00:07:56.266 17:17:15 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:56.266 17:17:15 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:56.266 17:17:15 -- nvmf/run.sh@29 -- # printf %02d 15 00:07:56.266 17:17:15 -- nvmf/run.sh@29 -- # port=4415 00:07:56.266 17:17:15 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:56.266 17:17:15 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:56.266 17:17:15 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:56.266 17:17:15 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 -r /var/tmp/spdk15.sock 00:07:56.266 [2024-11-09 17:17:15.816909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:56.266 [2024-11-09 17:17:15.816978] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372445 ] 00:07:56.266 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.266 [2024-11-09 17:17:15.998658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.526 [2024-11-09 17:17:16.062256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.526 [2024-11-09 17:17:16.062400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.526 [2024-11-09 17:17:16.120480] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.526 [2024-11-09 17:17:16.136872] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:56.526 INFO: Running with entropic power schedule (0xFF, 100). 00:07:56.526 INFO: Seed: 4128245960 00:07:56.526 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:56.526 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:56.526 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:56.526 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.526 #2 INITED exec/s: 0 rss: 60Mb 00:07:56.526 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:56.526 This may also happen if the target rejected all inputs we tried so far 00:07:56.526 [2024-11-09 17:17:16.192133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.526 [2024-11-09 17:17:16.192162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.526 [2024-11-09 17:17:16.192238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.526 [2024-11-09 17:17:16.192252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.785 NEW_FUNC[1/670]: 0x44fe48 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:56.785 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:56.785 #15 NEW cov: 11560 ft: 11561 corp: 2/20b lim: 35 exec/s: 0 rss: 68Mb L: 19/19 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:07:56.785 [2024-11-09 17:17:16.492721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.785 [2024-11-09 17:17:16.492756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.785 #16 NEW cov: 11673 ft: 12185 corp: 3/29b lim: 35 exec/s: 0 rss: 68Mb L: 9/19 MS: 1 CMP- DE: "P@\000\334\023\177\000\000"- 00:07:56.785 [2024-11-09 17:17:16.533064] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.785 [2024-11-09 17:17:16.533091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.785 [2024-11-09 17:17:16.533147] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.785 [2024-11-09 17:17:16.533161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.785 [2024-11-09 17:17:16.533215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.785 [2024-11-09 17:17:16.533228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.785 [2024-11-09 17:17:16.533284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.785 [2024-11-09 17:17:16.533297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.045 #17 NEW cov: 11679 ft: 13003 corp: 4/60b lim: 35 exec/s: 0 rss: 68Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:57.045 [2024-11-09 17:17:16.573200] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.573226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
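Each pair of notices above is SPDK printing one of the fuzzer's randomized admin commands followed by the controller's completion. For Get/Set Features, bits 07:00 of CDW10 carry the Feature Identifier, and bit 31 of the Set Features CDW10 is the Save bit, so cdw10:800000c6 requests saving reserved feature 0xC6; the completion status is printed as (SCT/SC), where 00/02 is the generic Invalid Field in Command and 01/0d is the command-specific Feature Identifier Not Saveable. A minimal decoding sketch under those NVMe-spec field layouts (illustrative only, not an SPDK API):

# Field layouts per the NVMe spec; FEATURE_NAMES is a tiny excerpt for the
# identifiers that actually appear in this log (0x0a = Write Atomicity Normal).
FEATURE_NAMES = {0x0A: "WRITE ATOMICITY"}  # unnamed FIDs print as RESERVED

def decode_set_features_cdw10(cdw10):
    fid = cdw10 & 0xFF           # bits 07:00 - Feature Identifier
    save = (cdw10 >> 31) & 0x1   # bit 31    - Save (SV)
    name = FEATURE_NAMES.get(fid, "RESERVED")
    return f"SET FEATURES {name} (FID 0x{fid:02x}, SV={save})"

STATUS_NAMES = {
    (0x0, 0x02): "INVALID FIELD in command (generic status)",
    (0x1, 0x0D): "FEATURE ID NOT SAVEABLE (command-specific status)",
}

def decode_status(sct, sc):
    return STATUS_NAMES.get((sct, sc), f"unknown status {sct:02x}/{sc:02x}")

# From the log: cdw10:800000c6 completing with (01/0d)
print(decode_set_features_cdw10(0x800000C6))  # SET FEATURES RESERVED (FID 0xc6, SV=1)
print(decode_status(0x01, 0x0D))              # FEATURE ID NOT SAVEABLE (...)
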
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.573281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.573295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.573348] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.573362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.573417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.573430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.045 #18 NEW cov: 11764 ft: 13310 corp: 5/92b lim: 35 exec/s: 0 rss: 68Mb L: 32/32 MS: 1 InsertByte- 00:07:57.045 [2024-11-09 17:17:16.613058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.613083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.613138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.613151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.045 #19 NEW cov: 11764 ft: 13436 corp: 6/108b lim: 35 exec/s: 0 rss: 68Mb L: 16/32 MS: 1 EraseBytes- 00:07:57.045 [2024-11-09 17:17:16.653055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.653081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.045 #20 NEW cov: 11764 ft: 13519 corp: 7/117b lim: 35 exec/s: 0 rss: 68Mb L: 9/32 MS: 1 ChangeByte- 00:07:57.045 [2024-11-09 17:17:16.693264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.693289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.693358] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.693371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.045 #21 NEW cov: 11764 ft: 13581 corp: 8/136b lim: 35 exec/s: 0 rss: 68Mb L: 19/32 MS: 1 ChangeBinInt- 00:07:57.045 [2024-11-09 17:17:16.733263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.733288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.045 #22 NEW cov: 11764 ft: 13642 corp: 9/145b lim: 35 exec/s: 0 rss: 68Mb L: 9/32 MS: 1 PersAutoDict- DE: "P@\000\334\023\177\000\000"- 00:07:57.045 [2024-11-09 17:17:16.773882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.773906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.773962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.773976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.774030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.774059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.774116] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.774129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.774181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.774195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.045 #23 NEW cov: 11764 ft: 13719 corp: 10/180b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:57.045 [2024-11-09 17:17:16.813671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.813696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.045 [2024-11-09 17:17:16.813752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.045 [2024-11-09 17:17:16.813766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.305 #24 NEW cov: 11764 ft: 13768 corp: 11/200b lim: 35 exec/s: 0 rss: 68Mb L: 20/35 MS: 1 InsertByte- 00:07:57.305 [2024-11-09 17:17:16.854114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.305 [2024-11-09 17:17:16.854139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.305 [2024-11-09 17:17:16.854197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.305 [2024-11-09 17:17:16.854211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.305 
[2024-11-09 17:17:16.854265] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.305 [2024-11-09 17:17:16.854277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.305 [2024-11-09 17:17:16.854330] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.305 [2024-11-09 17:17:16.854343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.305 [2024-11-09 17:17:16.854395] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.305 [2024-11-09 17:17:16.854407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.305 #30 NEW cov: 11764 ft: 13794 corp: 12/235b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:57.305 [2024-11-09 17:17:16.893757] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.305 [2024-11-09 17:17:16.893783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.305 #31 NEW cov: 11764 ft: 13863 corp: 13/245b lim: 35 exec/s: 0 rss: 69Mb L: 10/35 MS: 1 InsertByte- 00:07:57.306 [2024-11-09 17:17:16.934323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:16.934348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:16.934414] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:16.934428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:16.934483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:16.934496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:16.934553] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:16.934566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:16.934620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:16.934633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.306 #32 NEW cov: 11764 ft: 13888 corp: 14/280b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:57.306 [2024-11-09 17:17:16.974350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:16.974375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:16.974431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:16.974444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:16.974500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:16.974514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:16.974570] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:16.974583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.306 #33 NEW cov: 11764 ft: 13896 corp: 15/311b lim: 35 exec/s: 0 rss: 69Mb L: 31/35 MS: 1 ChangeBit- 00:07:57.306 [2024-11-09 17:17:17.014108] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:17.014133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.306 #34 NEW cov: 11764 ft: 13933 corp: 16/320b lim: 35 exec/s: 0 rss: 69Mb L: 9/35 MS: 1 ShuffleBytes- 00:07:57.306 [2024-11-09 17:17:17.054592] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:17.054617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:17.054673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:17.054687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:17.054740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:17.054770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.306 [2024-11-09 17:17:17.054824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.306 [2024-11-09 17:17:17.054837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.566 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:57.566 #35 NEW cov: 11787 ft: 13970 corp: 17/351b lim: 35 exec/s: 0 rss: 69Mb L: 31/35 MS: 1 PersAutoDict- DE: "P@\000\334\023\177\000\000"- 00:07:57.566 [2024-11-09 
17:17:17.104732] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.104758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.566 [2024-11-09 17:17:17.104813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.104826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.566 [2024-11-09 17:17:17.104877] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.104890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.566 [2024-11-09 17:17:17.104945] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.104957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.566 #36 NEW cov: 11787 ft: 13987 corp: 18/384b lim: 35 exec/s: 0 rss: 69Mb L: 33/35 MS: 1 CopyPart- 00:07:57.566 [2024-11-09 17:17:17.144513] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000150 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.144543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.566 #37 NEW cov: 11787 ft: 14017 corp: 19/395b lim: 35 exec/s: 37 rss: 69Mb L: 11/35 MS: 1 InsertByte- 00:07:57.566 [2024-11-09 17:17:17.184630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.184655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.566 #38 NEW cov: 11787 ft: 14034 corp: 20/404b lim: 35 exec/s: 38 rss: 69Mb L: 9/35 MS: 1 ChangeBit- 00:07:57.566 [2024-11-09 17:17:17.225070] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.225095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.566 [2024-11-09 17:17:17.225150] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.225163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.566 [2024-11-09 17:17:17.225205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.225218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.566 [2024-11-09 17:17:17.225271] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED 
cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.225283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.566 #39 NEW cov: 11787 ft: 14053 corp: 21/436b lim: 35 exec/s: 39 rss: 69Mb L: 32/35 MS: 1 ShuffleBytes- 00:07:57.566 [2024-11-09 17:17:17.264944] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.264968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.566 [2024-11-09 17:17:17.265023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.265037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.566 #40 NEW cov: 11787 ft: 14109 corp: 22/452b lim: 35 exec/s: 40 rss: 69Mb L: 16/35 MS: 1 ChangeByte- 00:07:57.566 [2024-11-09 17:17:17.305216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.305240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.566 [2024-11-09 17:17:17.305295] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.305309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.566 [2024-11-09 17:17:17.305362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.566 [2024-11-09 17:17:17.305391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.566 #41 NEW cov: 11787 ft: 14279 corp: 23/473b lim: 35 exec/s: 41 rss: 69Mb L: 21/35 MS: 1 InsertByte- 00:07:57.826 [2024-11-09 17:17:17.345310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.345335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.345390] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.345403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.345456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.345469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.826 #42 NEW cov: 11787 ft: 14289 corp: 24/499b lim: 35 exec/s: 42 rss: 69Mb L: 26/35 MS: 1 CopyPart- 00:07:57.826 [2024-11-09 17:17:17.385406] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.385432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.385487] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.385501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.385557] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.385571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.826 #43 NEW cov: 11787 ft: 14300 corp: 25/526b lim: 35 exec/s: 43 rss: 70Mb L: 27/35 MS: 1 EraseBytes- 00:07:57.826 [2024-11-09 17:17:17.425653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.425678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.425734] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.425747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.425802] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.425815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.425869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.425882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.826 #44 NEW cov: 11787 ft: 14310 corp: 26/560b lim: 35 exec/s: 44 rss: 70Mb L: 34/35 MS: 1 CopyPart- 00:07:57.826 [2024-11-09 17:17:17.465492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.465517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.465578] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.465600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.826 #45 NEW cov: 11787 ft: 14349 corp: 27/574b lim: 35 exec/s: 45 rss: 70Mb L: 14/35 MS: 1 CrossOver- 00:07:57.826 [2024-11-09 17:17:17.505489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 
[2024-11-09 17:17:17.505514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.826 #50 NEW cov: 11787 ft: 14357 corp: 28/583b lim: 35 exec/s: 50 rss: 70Mb L: 9/35 MS: 5 ChangeByte-ShuffleBytes-ChangeBit-ChangeBit-PersAutoDict- DE: "P@\000\334\023\177\000\000"- 00:07:57.826 [2024-11-09 17:17:17.545540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.545565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.826 #51 NEW cov: 11787 ft: 14397 corp: 29/592b lim: 35 exec/s: 51 rss: 70Mb L: 9/35 MS: 1 ChangeBit- 00:07:57.826 [2024-11-09 17:17:17.576088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.576113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.576168] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.576181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.576236] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.576266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.826 [2024-11-09 17:17:17.576320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.826 [2024-11-09 17:17:17.576333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.086 #52 NEW cov: 11787 ft: 14408 corp: 30/620b lim: 35 exec/s: 52 rss: 70Mb L: 28/35 MS: 1 PersAutoDict- DE: "P@\000\334\023\177\000\000"- 00:07:58.086 [2024-11-09 17:17:17.616295] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.616320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.086 [2024-11-09 17:17:17.616376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.616390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.086 [2024-11-09 17:17:17.616445] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.616458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.086 [2024-11-09 17:17:17.616511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:58.086 [2024-11-09 17:17:17.616527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.086 [2024-11-09 17:17:17.616580] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.616593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.086 #53 NEW cov: 11787 ft: 14437 corp: 31/655b lim: 35 exec/s: 53 rss: 70Mb L: 35/35 MS: 1 InsertByte- 00:07:58.086 [2024-11-09 17:17:17.656331] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.656356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.086 [2024-11-09 17:17:17.656465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.656479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.086 [2024-11-09 17:17:17.656537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.656550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.086 NEW_FUNC[1/1]: 0x46d7e8 in feat_number_of_queues /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:318 00:07:58.086 #54 NEW cov: 11819 ft: 14534 corp: 32/685b lim: 35 exec/s: 54 rss: 70Mb L: 30/35 MS: 1 InsertRepeatedBytes- 00:07:58.086 [2024-11-09 17:17:17.696436] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.696461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.086 [2024-11-09 17:17:17.696536] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.696550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.086 [2024-11-09 17:17:17.696602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.696615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.086 [2024-11-09 17:17:17.696669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.696682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.086 #55 NEW cov: 11819 ft: 14544 corp: 33/717b lim: 35 exec/s: 55 rss: 70Mb L: 32/35 MS: 1 CopyPart- 00:07:58.086 [2024-11-09 17:17:17.736175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES RESERVED cid:4 cdw10:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.086 [2024-11-09 17:17:17.736201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.086 #56 NEW cov: 11819 ft: 14632 corp: 34/726b lim: 35 exec/s: 56 rss: 70Mb L: 9/35 MS: 1 ChangeByte- 00:07:58.087 [2024-11-09 17:17:17.776669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.087 [2024-11-09 17:17:17.776694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.087 [2024-11-09 17:17:17.776801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.087 [2024-11-09 17:17:17.776815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.087 [2024-11-09 17:17:17.776869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.087 [2024-11-09 17:17:17.776882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.087 #57 NEW cov: 11819 ft: 14643 corp: 35/756b lim: 35 exec/s: 57 rss: 70Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:58.087 [2024-11-09 17:17:17.816543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.087 [2024-11-09 17:17:17.816568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.087 [2024-11-09 17:17:17.816626] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.087 [2024-11-09 17:17:17.816640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.087 #58 NEW cov: 11819 ft: 14688 corp: 36/774b lim: 35 exec/s: 58 rss: 70Mb L: 18/35 MS: 1 CrossOver- 00:07:58.347 [2024-11-09 17:17:17.856927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.856952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.857076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.857091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.857146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.857159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.897006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 
[2024-11-09 17:17:17.897032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.897144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.897158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.897213] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.897225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.347 #60 NEW cov: 11819 ft: 14696 corp: 37/804b lim: 35 exec/s: 60 rss: 70Mb L: 30/35 MS: 2 ChangeBinInt-PersAutoDict- DE: "P@\000\334\023\177\000\000"- 00:07:58.347 [2024-11-09 17:17:17.937138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.937163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.937220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.937233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.937289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.937302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.937357] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000000dc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.937373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.347 #61 NEW cov: 11819 ft: 14721 corp: 38/838b lim: 35 exec/s: 61 rss: 70Mb L: 34/35 MS: 1 PersAutoDict- DE: "P@\000\334\023\177\000\000"- 00:07:58.347 [2024-11-09 17:17:17.977269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.977295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.977350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.977363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.977420] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.977433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:17.977491] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:17.977504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.347 #62 NEW cov: 11819 ft: 14741 corp: 39/870b lim: 35 exec/s: 62 rss: 70Mb L: 32/35 MS: 1 ChangeByte- 00:07:58.347 [2024-11-09 17:17:18.016897] ctrlr.c:1720:nvmf_ctrlr_get_features_reservation_notification_mask: *ERROR*: Set Features - Invalid Namespace ID 00:07:58.347 [2024-11-09 17:17:18.017021] ctrlr.c:1720:nvmf_ctrlr_get_features_reservation_notification_mask: *ERROR*: Set Features - Invalid Namespace ID 00:07:58.347 [2024-11-09 17:17:18.017128] ctrlr.c:1720:nvmf_ctrlr_get_features_reservation_notification_mask: *ERROR*: Set Features - Invalid Namespace ID 00:07:58.347 [2024-11-09 17:17:18.017443] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE MASK cid:4 cdw10:00000482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.017469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:18.017533] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE MASK cid:5 cdw10:00000482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.017547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:18.017602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE MASK cid:6 cdw10:00000482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.017615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:18.017669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.017682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.347 NEW_FUNC[1/1]: 0x1124d78 in nvmf_ctrlr_get_features_reservation_notification_mask /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1704 00:07:58.347 #63 NEW cov: 11844 ft: 14782 corp: 40/900b lim: 35 exec/s: 63 rss: 70Mb L: 30/35 MS: 1 InsertRepeatedBytes- 00:07:58.347 [2024-11-09 17:17:18.067481] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.067506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:18.067613] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.067631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:18.067686] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 
[2024-11-09 17:17:18.067699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.347 #64 NEW cov: 11844 ft: 14800 corp: 41/928b lim: 35 exec/s: 64 rss: 70Mb L: 28/35 MS: 1 EraseBytes- 00:07:58.347 [2024-11-09 17:17:18.107581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.107608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:18.107664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.107678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:18.107750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.107764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.347 [2024-11-09 17:17:18.107822] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.347 [2024-11-09 17:17:18.107836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.607 #65 NEW cov: 11844 ft: 14806 corp: 42/959b lim: 35 exec/s: 65 rss: 70Mb L: 31/35 MS: 1 CopyPart- 00:07:58.607 [2024-11-09 17:17:18.147708] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.607 [2024-11-09 17:17:18.147733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.607 [2024-11-09 17:17:18.147788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.607 [2024-11-09 17:17:18.147802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.607 [2024-11-09 17:17:18.147857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.607 [2024-11-09 17:17:18.147869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.607 [2024-11-09 17:17:18.147926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.607 [2024-11-09 17:17:18.147940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.607 #66 NEW cov: 11844 ft: 14839 corp: 43/992b lim: 35 exec/s: 33 rss: 70Mb L: 33/35 MS: 1 ChangeBinInt- 00:07:58.607 #66 DONE cov: 11844 ft: 14839 corp: 43/992b lim: 35 exec/s: 33 rss: 70Mb 00:07:58.607 ###### Recommended dictionary. ###### 00:07:58.607 "P@\000\334\023\177\000\000" # Uses: 6 00:07:58.607 ###### End of recommended dictionary. 
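Between targets, nvmf/run.sh removes the previous JSON config and relaunches the fuzzer, as in the trace repeated below for target 16: the listening port is 44NN via printf %02d, the transport ID's trsvcid is rewritten with sed, and llvm_nvme_fuzz is started against a per-target corpus directory with -Z N and a per-target RPC socket. The following is a rough Python reconstruction with every path and flag read off the visible trace; the real logic lives in nvmf/run.sh and common.sh, and the sed-generated /tmp config step is elided here.

import os
import subprocess

SPDK = "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk"

def start_llvm_fuzz(n, time_s=1, corelist="0x1"):
    # trsvcid 44NN, matching the "printf %02d N" seen in the trace (assumes n < 100)
    trid = ("trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 "
            f"traddr:127.0.0.1 trsvcid:44{n:02d}")
    corpus = f"{SPDK}/../corpus/llvm_nvmf_{n}"
    os.makedirs(corpus, exist_ok=True)  # mkdir -p of the per-target corpus dir
    subprocess.run([
        f"{SPDK}/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz",
        "-m", corelist, "-s", "512",
        "-P", f"{SPDK}/../output/llvm/",
        "-F", trid,
        "-c", f"/tmp/fuzz_json_{n}.conf",  # produced by the elided sed step
        "-t", str(time_s),
        "-D", corpus,
        "-Z", str(n),
        "-r", f"/var/tmp/spdk{n}.sock",
    ], check=True)
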
###### 00:07:58.607 Done 66 runs in 2 second(s) 00:07:58.607 17:17:18 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_15.conf 00:07:58.607 17:17:18 -- ../common.sh@72 -- # (( i++ )) 00:07:58.607 17:17:18 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:58.607 17:17:18 -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:58.607 17:17:18 -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:58.607 17:17:18 -- nvmf/run.sh@24 -- # local timen=1 00:07:58.607 17:17:18 -- nvmf/run.sh@25 -- # local core=0x1 00:07:58.607 17:17:18 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:58.607 17:17:18 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:58.607 17:17:18 -- nvmf/run.sh@29 -- # printf %02d 16 00:07:58.607 17:17:18 -- nvmf/run.sh@29 -- # port=4416 00:07:58.607 17:17:18 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:58.607 17:17:18 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:58.607 17:17:18 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:58.607 17:17:18 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 -r /var/tmp/spdk16.sock 00:07:58.607 [2024-11-09 17:17:18.341808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:58.607 [2024-11-09 17:17:18.341903] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372988 ] 00:07:58.607 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.867 [2024-11-09 17:17:18.518654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.867 [2024-11-09 17:17:18.581965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:58.867 [2024-11-09 17:17:18.582096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.127 [2024-11-09 17:17:18.640102] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.127 [2024-11-09 17:17:18.656457] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:59.127 INFO: Running with entropic power schedule (0xFF, 100). 00:07:59.127 INFO: Seed: 2352271952 00:07:59.127 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:59.127 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:59.127 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:59.127 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.127 #2 INITED exec/s: 0 rss: 60Mb 00:07:59.127 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:59.127 This may also happen if the target rejected all inputs we tried so far 00:07:59.127 [2024-11-09 17:17:18.701594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.127 [2024-11-09 17:17:18.701625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.127 [2024-11-09 17:17:18.701676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.127 [2024-11-09 17:17:18.701692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.387 NEW_FUNC[1/669]: 0x451308 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:59.387 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:59.387 #14 NEW cov: 11655 ft: 11655 corp: 2/52b lim: 105 exec/s: 0 rss: 68Mb L: 51/51 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:59.387 [2024-11-09 17:17:18.992253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.387 [2024-11-09 17:17:18.992289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.387 [2024-11-09 17:17:18.992353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.387 [2024-11-09 17:17:18.992372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.387 NEW_FUNC[1/2]: 0xf41168 in posix_sock_recv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1597 00:07:59.387 NEW_FUNC[2/2]: 0xf414c8 in posix_sock_readv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1547 00:07:59.387 #17 NEW cov: 11776 ft: 12037 corp: 3/97b lim: 105 exec/s: 0 rss: 69Mb L: 45/51 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:07:59.387 [2024-11-09 17:17:19.032142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.387 [2024-11-09 17:17:19.032170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.387 #22 NEW cov: 11782 ft: 12667 corp: 4/127b lim: 105 exec/s: 0 rss: 69Mb L: 30/51 MS: 5 ChangeBit-InsertByte-ShuffleBytes-CopyPart-CrossOver- 00:07:59.387 [2024-11-09 17:17:19.072372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.387 [2024-11-09 17:17:19.072399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.387 [2024-11-09 17:17:19.072440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:59.387 [2024-11-09 17:17:19.072455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.387 #23 NEW cov: 11867 ft: 13008 corp: 5/178b lim: 105 exec/s: 0 rss: 69Mb L: 51/51 MS: 1 ChangeBit- 00:07:59.387 [2024-11-09 17:17:19.112508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.387 [2024-11-09 17:17:19.112540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.387 [2024-11-09 17:17:19.112577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.387 [2024-11-09 17:17:19.112591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.387 #24 NEW cov: 11867 ft: 13133 corp: 6/229b lim: 105 exec/s: 0 rss: 69Mb L: 51/51 MS: 1 ChangeBit- 00:07:59.387 [2024-11-09 17:17:19.152660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.387 [2024-11-09 17:17:19.152688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.387 [2024-11-09 17:17:19.152738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.387 [2024-11-09 17:17:19.152753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.646 #25 NEW cov: 11867 ft: 13244 corp: 7/280b lim: 105 exec/s: 0 rss: 69Mb L: 51/51 MS: 1 CopyPart- 00:07:59.646 [2024-11-09 17:17:19.192591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.646 [2024-11-09 17:17:19.192618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.646 #26 NEW cov: 11867 ft: 13386 corp: 8/310b lim: 105 exec/s: 0 rss: 69Mb L: 30/51 MS: 1 ChangeByte- 00:07:59.646 [2024-11-09 17:17:19.232821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.646 [2024-11-09 17:17:19.232853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.646 [2024-11-09 17:17:19.232906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.646 [2024-11-09 17:17:19.232921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.646 #32 NEW cov: 11867 ft: 13425 corp: 9/361b lim: 105 exec/s: 0 rss: 69Mb L: 51/51 MS: 1 ChangeBit- 00:07:59.646 [2024-11-09 17:17:19.273164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:59.646 [2024-11-09 17:17:19.273191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.646 [2024-11-09 17:17:19.273235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13046945784446432693 len:4113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.646 [2024-11-09 17:17:19.273251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.646 [2024-11-09 17:17:19.273302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1157442765409226768 len:4113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.646 [2024-11-09 17:17:19.273332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.646 [2024-11-09 17:17:19.273386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:1157442765409226768 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.646 [2024-11-09 17:17:19.273402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.646 #38 NEW cov: 11867 ft: 13984 corp: 10/453b lim: 105 exec/s: 0 rss: 69Mb L: 92/92 MS: 1 InsertRepeatedBytes- 00:07:59.646 [2024-11-09 17:17:19.323124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.646 [2024-11-09 17:17:19.323152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.646 [2024-11-09 17:17:19.323217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.646 [2024-11-09 17:17:19.323233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.646 #39 NEW cov: 11867 ft: 14126 corp: 11/504b lim: 105 exec/s: 0 rss: 69Mb L: 51/92 MS: 1 ShuffleBytes- 00:07:59.646 [2024-11-09 17:17:19.363122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.646 [2024-11-09 17:17:19.363150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.646 #45 NEW cov: 11867 ft: 14163 corp: 12/534b lim: 105 exec/s: 0 rss: 69Mb L: 30/92 MS: 1 ChangeByte- 00:07:59.646 [2024-11-09 17:17:19.403339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.647 [2024-11-09 17:17:19.403367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.647 [2024-11-09 17:17:19.403403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.647 [2024-11-09 17:17:19.403418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.906 #46 NEW cov: 11867 
ft: 14188 corp: 13/585b lim: 105 exec/s: 0 rss: 69Mb L: 51/92 MS: 1 ShuffleBytes- 00:07:59.906 [2024-11-09 17:17:19.443308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081587365 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.906 [2024-11-09 17:17:19.443336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.906 #47 NEW cov: 11867 ft: 14214 corp: 14/615b lim: 105 exec/s: 0 rss: 69Mb L: 30/92 MS: 1 ChangeBit- 00:07:59.906 [2024-11-09 17:17:19.483440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.906 [2024-11-09 17:17:19.483468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.906 #48 NEW cov: 11867 ft: 14238 corp: 15/647b lim: 105 exec/s: 0 rss: 70Mb L: 32/92 MS: 1 CMP- DE: "\001\000"- 00:07:59.906 [2024-11-09 17:17:19.523711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.906 [2024-11-09 17:17:19.523739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.906 [2024-11-09 17:17:19.523804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.906 [2024-11-09 17:17:19.523821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.906 #49 NEW cov: 11867 ft: 14313 corp: 16/698b lim: 105 exec/s: 0 rss: 70Mb L: 51/92 MS: 1 ChangeByte- 00:07:59.906 [2024-11-09 17:17:19.563725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.906 [2024-11-09 17:17:19.563753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.906 #50 NEW cov: 11867 ft: 14336 corp: 17/728b lim: 105 exec/s: 0 rss: 70Mb L: 30/92 MS: 1 ChangeBit- 00:07:59.906 [2024-11-09 17:17:19.603816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517192132005 len:11174 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.906 [2024-11-09 17:17:19.603843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.906 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:59.906 #53 NEW cov: 11890 ft: 14392 corp: 18/759b lim: 105 exec/s: 0 rss: 70Mb L: 31/92 MS: 3 InsertByte-InsertRepeatedBytes-CrossOver- 00:07:59.906 [2024-11-09 17:17:19.644014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517192132005 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.906 [2024-11-09 17:17:19.644042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.906 [2024-11-09 17:17:19.644082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.906 [2024-11-09 17:17:19.644097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.906 #57 NEW cov: 11890 ft: 14452 corp: 19/807b lim: 105 exec/s: 0 rss: 70Mb L: 48/92 MS: 4 EraseBytes-ChangeByte-CopyPart-CrossOver- 00:08:00.165 [2024-11-09 17:17:19.684168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.684196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.165 [2024-11-09 17:17:19.684248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.684263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.165 #58 NEW cov: 11890 ft: 14487 corp: 20/858b lim: 105 exec/s: 58 rss: 70Mb L: 51/92 MS: 1 ChangeBinInt- 00:08:00.165 [2024-11-09 17:17:19.724145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.724173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.165 #59 NEW cov: 11890 ft: 14509 corp: 21/889b lim: 105 exec/s: 59 rss: 70Mb L: 31/92 MS: 1 InsertByte- 00:08:00.165 [2024-11-09 17:17:19.764271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081587365 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.764298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.165 #60 NEW cov: 11890 ft: 14622 corp: 22/919b lim: 105 exec/s: 60 rss: 70Mb L: 30/92 MS: 1 ShuffleBytes- 00:08:00.165 [2024-11-09 17:17:19.804398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.804425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.165 #61 NEW cov: 11890 ft: 14652 corp: 23/957b lim: 105 exec/s: 61 rss: 70Mb L: 38/92 MS: 1 InsertRepeatedBytes- 00:08:00.165 [2024-11-09 17:17:19.844629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128518282651045 len:3679 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.844656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.165 [2024-11-09 17:17:19.844704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:11174 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.844719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.165 #67 NEW cov: 11890 ft: 14691 corp: 24/1004b lim: 105 
exec/s: 67 rss: 70Mb L: 47/92 MS: 1 CrossOver- 00:08:00.165 [2024-11-09 17:17:19.884952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.884979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.165 [2024-11-09 17:17:19.885028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13046945784446432693 len:4113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.885043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.165 [2024-11-09 17:17:19.885095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1157442765409226768 len:4113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.885110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.165 [2024-11-09 17:17:19.885163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:1157442765409226768 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.885178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.165 #68 NEW cov: 11890 ft: 14714 corp: 25/1097b lim: 105 exec/s: 68 rss: 70Mb L: 93/93 MS: 1 InsertByte- 00:08:00.165 [2024-11-09 17:17:19.924755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3649505202830419365 len:11174 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.165 [2024-11-09 17:17:19.924782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.425 #69 NEW cov: 11890 ft: 14731 corp: 26/1128b lim: 105 exec/s: 69 rss: 70Mb L: 31/93 MS: 1 ChangeByte- 00:08:00.425 [2024-11-09 17:17:19.964845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3603161887231288741 len:42251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.425 [2024-11-09 17:17:19.964872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.425 #70 NEW cov: 11890 ft: 14772 corp: 27/1151b lim: 105 exec/s: 70 rss: 70Mb L: 23/93 MS: 1 EraseBytes- 00:08:00.425 [2024-11-09 17:17:20.005009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3649505202830419365 len:11174 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.425 [2024-11-09 17:17:20.005037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.425 #71 NEW cov: 11890 ft: 14801 corp: 28/1182b lim: 105 exec/s: 71 rss: 70Mb L: 31/93 MS: 1 ChangeBit- 00:08:00.425 [2024-11-09 17:17:20.045268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.425 [2024-11-09 17:17:20.045298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.425 [2024-11-09 17:17:20.045352] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:257 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.425 [2024-11-09 17:17:20.045367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.425 #72 NEW cov: 11890 ft: 14864 corp: 29/1235b lim: 105 exec/s: 72 rss: 70Mb L: 53/93 MS: 1 PersAutoDict- DE: "\001\000"- 00:08:00.425 [2024-11-09 17:17:20.085825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.425 [2024-11-09 17:17:20.085855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.425 #73 NEW cov: 11890 ft: 14881 corp: 30/1266b lim: 105 exec/s: 73 rss: 70Mb L: 31/93 MS: 1 InsertByte- 00:08:00.425 [2024-11-09 17:17:20.125317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12008186111119482533 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.425 [2024-11-09 17:17:20.125346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.425 #74 NEW cov: 11890 ft: 14898 corp: 31/1298b lim: 105 exec/s: 74 rss: 70Mb L: 32/93 MS: 1 ChangeBinInt- 00:08:00.425 [2024-11-09 17:17:20.165583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.425 [2024-11-09 17:17:20.165622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.425 [2024-11-09 17:17:20.165691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.425 [2024-11-09 17:17:20.165707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.425 #75 NEW cov: 11890 ft: 14929 corp: 32/1349b lim: 105 exec/s: 75 rss: 70Mb L: 51/93 MS: 1 ChangeBinInt- 00:08:00.685 [2024-11-09 17:17:20.205907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.205939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.205975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.205991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.206044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11935958093980345765 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.206060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.206112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 
lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.206127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.685 #76 NEW cov: 11890 ft: 14941 corp: 33/1441b lim: 105 exec/s: 76 rss: 70Mb L: 92/93 MS: 1 CrossOver- 00:08:00.685 [2024-11-09 17:17:20.246046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.246072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.246135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13046945784446432693 len:23809 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.246151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.246204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1157442765409226768 len:4113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.246220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.246273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:1157442765409226768 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.246289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.685 #77 NEW cov: 11890 ft: 14980 corp: 34/1534b lim: 105 exec/s: 77 rss: 70Mb L: 93/93 MS: 1 ChangeBinInt- 00:08:00.685 [2024-11-09 17:17:20.286179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13093571280822973877 len:46518 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.286206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.286259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13046945784446432693 len:4113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.286274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.286327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1157442765409226768 len:4113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.286343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.286396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:1157442766706138461 len:4113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.286413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.685 #78 NEW cov: 11890 ft: 14981 corp: 35/1631b lim: 105 exec/s: 78 rss: 
70Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:08:00.685 [2024-11-09 17:17:20.325898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517192132005 len:11174 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.325925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.685 #79 NEW cov: 11890 ft: 15000 corp: 36/1662b lim: 105 exec/s: 79 rss: 70Mb L: 31/97 MS: 1 CrossOver- 00:08:00.685 [2024-11-09 17:17:20.366148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.366175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.366209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.366224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.685 #80 NEW cov: 11890 ft: 15013 corp: 37/1708b lim: 105 exec/s: 80 rss: 70Mb L: 46/97 MS: 1 EraseBytes- 00:08:00.685 [2024-11-09 17:17:20.396148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.396174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.685 #81 NEW cov: 11890 ft: 15037 corp: 38/1738b lim: 105 exec/s: 81 rss: 70Mb L: 30/97 MS: 1 ChangeBit- 00:08:00.685 [2024-11-09 17:17:20.436401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.436427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.685 [2024-11-09 17:17:20.436481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.685 [2024-11-09 17:17:20.436497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.945 #82 NEW cov: 11890 ft: 15046 corp: 39/1789b lim: 105 exec/s: 82 rss: 70Mb L: 51/97 MS: 1 ChangeByte- 00:08:00.945 [2024-11-09 17:17:20.476474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.945 [2024-11-09 17:17:20.476501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.945 [2024-11-09 17:17:20.476541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.945 [2024-11-09 17:17:20.476557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.945 #83 NEW cov: 11890 ft: 15051 corp: 40/1835b 
lim: 105 exec/s: 83 rss: 70Mb L: 46/97 MS: 1 ShuffleBytes- 00:08:00.945 [2024-11-09 17:17:20.516639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936227457149281701 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.945 [2024-11-09 17:17:20.516666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.945 [2024-11-09 17:17:20.516703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:257 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.945 [2024-11-09 17:17:20.516721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.945 #84 NEW cov: 11890 ft: 15079 corp: 41/1888b lim: 105 exec/s: 84 rss: 70Mb L: 53/97 MS: 1 ChangeByte- 00:08:00.945 [2024-11-09 17:17:20.556653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.945 [2024-11-09 17:17:20.556680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.945 #85 NEW cov: 11890 ft: 15093 corp: 42/1918b lim: 105 exec/s: 85 rss: 70Mb L: 30/97 MS: 1 ChangeByte- 00:08:00.945 [2024-11-09 17:17:20.596725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128517081554597 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.945 [2024-11-09 17:17:20.596751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.945 #86 NEW cov: 11890 ft: 15097 corp: 43/1949b lim: 105 exec/s: 86 rss: 70Mb L: 31/97 MS: 1 InsertByte- 00:08:00.945 [2024-11-09 17:17:20.636820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11936128501102781861 len:42406 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.945 [2024-11-09 17:17:20.636847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.945 #87 NEW cov: 11890 ft: 15118 corp: 44/1978b lim: 105 exec/s: 87 rss: 70Mb L: 29/97 MS: 1 EraseBytes- 00:08:00.945 [2024-11-09 17:17:20.676960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3649505202830419365 len:11174 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.945 [2024-11-09 17:17:20.676986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.945 #88 NEW cov: 11890 ft: 15136 corp: 45/2009b lim: 105 exec/s: 44 rss: 70Mb L: 31/97 MS: 1 CrossOver- 00:08:00.945 #88 DONE cov: 11890 ft: 15136 corp: 45/2009b lim: 105 exec/s: 44 rss: 70Mb 00:08:00.945 ###### Recommended dictionary. ###### 00:08:00.945 "\001\000" # Uses: 2 00:08:00.945 ###### End of recommended dictionary. 
###### 00:08:00.945 Done 88 runs in 2 second(s) 00:08:01.204 17:17:20 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_16.conf 00:08:01.204 17:17:20 -- ../common.sh@72 -- # (( i++ )) 00:08:01.204 17:17:20 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.204 17:17:20 -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:01.204 17:17:20 -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:01.204 17:17:20 -- nvmf/run.sh@24 -- # local timen=1 00:08:01.204 17:17:20 -- nvmf/run.sh@25 -- # local core=0x1 00:08:01.204 17:17:20 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:01.204 17:17:20 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:01.204 17:17:20 -- nvmf/run.sh@29 -- # printf %02d 17 00:08:01.204 17:17:20 -- nvmf/run.sh@29 -- # port=4417 00:08:01.204 17:17:20 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:01.204 17:17:20 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:01.204 17:17:20 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:01.204 17:17:20 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 -r /var/tmp/spdk17.sock 00:08:01.204 [2024-11-09 17:17:20.869679] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:01.204 [2024-11-09 17:17:20.869750] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373303 ] 00:08:01.205 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.464 [2024-11-09 17:17:21.059539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.464 [2024-11-09 17:17:21.123519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.464 [2024-11-09 17:17:21.123655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.464 [2024-11-09 17:17:21.181597] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.464 [2024-11-09 17:17:21.197976] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:01.464 INFO: Running with entropic power schedule (0xFF, 100). 00:08:01.464 INFO: Seed: 599318209 00:08:01.464 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:01.464 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:01.464 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:01.464 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.464 #2 INITED exec/s: 0 rss: 61Mb 00:08:01.464 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:01.464 This may also happen if the target rejected all inputs we tried so far 00:08:01.723 [2024-11-09 17:17:21.253010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.723 [2024-11-09 17:17:21.253042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.982 NEW_FUNC[1/672]: 0x4545f8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:01.982 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:01.982 #14 NEW cov: 11684 ft: 11685 corp: 2/40b lim: 120 exec/s: 0 rss: 68Mb L: 39/39 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:01.982 [2024-11-09 17:17:21.573784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.982 [2024-11-09 17:17:21.573815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.982 #25 NEW cov: 11797 ft: 11977 corp: 3/79b lim: 120 exec/s: 0 rss: 68Mb L: 39/39 MS: 1 ChangeBinInt- 00:08:01.982 [2024-11-09 17:17:21.614142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.983 [2024-11-09 17:17:21.614171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.983 [2024-11-09 17:17:21.614205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.983 [2024-11-09 17:17:21.614221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.983 [2024-11-09 17:17:21.614273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.983 [2024-11-09 17:17:21.614288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.983 #29 NEW cov: 11803 ft: 13057 corp: 4/169b lim: 120 exec/s: 0 rss: 68Mb L: 90/90 MS: 4 ShuffleBytes-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:08:01.983 [2024-11-09 17:17:21.653943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660864 len:62913 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.983 [2024-11-09 17:17:21.653969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.983 #30 NEW cov: 11888 ft: 13465 corp: 5/208b lim: 120 exec/s: 0 rss: 68Mb L: 39/90 MS: 1 CMP- DE: "\377\377\377\365"- 00:08:01.983 [2024-11-09 17:17:21.694018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889382724726210752 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.983 [2024-11-09 17:17:21.694045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.983 #31 NEW cov: 11888 ft: 13659 corp: 6/251b lim: 
120 exec/s: 0 rss: 68Mb L: 43/90 MS: 1 PersAutoDict- DE: "\377\377\377\365"- 00:08:01.983 [2024-11-09 17:17:21.734476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.983 [2024-11-09 17:17:21.734503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.983 [2024-11-09 17:17:21.734545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.983 [2024-11-09 17:17:21.734561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.983 [2024-11-09 17:17:21.734613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.983 [2024-11-09 17:17:21.734628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.242 #32 NEW cov: 11888 ft: 13756 corp: 7/341b lim: 120 exec/s: 0 rss: 68Mb L: 90/90 MS: 1 ShuffleBytes- 00:08:02.242 [2024-11-09 17:17:21.784762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.242 [2024-11-09 17:17:21.784790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.242 [2024-11-09 17:17:21.784837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.242 [2024-11-09 17:17:21.784852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.242 [2024-11-09 17:17:21.784905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.242 [2024-11-09 17:17:21.784919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.243 [2024-11-09 17:17:21.784973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.784988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.243 #33 NEW cov: 11888 ft: 14202 corp: 8/452b lim: 120 exec/s: 0 rss: 69Mb L: 111/111 MS: 1 CopyPart- 00:08:02.243 [2024-11-09 17:17:21.834763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.834790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.243 [2024-11-09 17:17:21.834827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.834842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.243 [2024-11-09 17:17:21.834896] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.834910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.243 #34 NEW cov: 11888 ft: 14269 corp: 9/542b lim: 120 exec/s: 0 rss: 69Mb L: 90/111 MS: 1 ShuffleBytes- 00:08:02.243 [2024-11-09 17:17:21.874561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889382724726210752 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.874592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.243 #40 NEW cov: 11888 ft: 14328 corp: 10/568b lim: 120 exec/s: 0 rss: 69Mb L: 26/111 MS: 1 EraseBytes- 00:08:02.243 [2024-11-09 17:17:21.914694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660864 len:62913 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.914722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.243 #41 NEW cov: 11888 ft: 14488 corp: 11/611b lim: 120 exec/s: 0 rss: 69Mb L: 43/111 MS: 1 PersAutoDict- DE: "\377\377\377\365"- 00:08:02.243 [2024-11-09 17:17:21.955140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.955167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.243 [2024-11-09 17:17:21.955204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.955219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.243 [2024-11-09 17:17:21.955271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.955286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.243 #42 NEW cov: 11888 ft: 14569 corp: 12/696b lim: 120 exec/s: 0 rss: 69Mb L: 85/111 MS: 1 EraseBytes- 00:08:02.243 [2024-11-09 17:17:21.995217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.995244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.243 [2024-11-09 17:17:21.995281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.995297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.243 [2024-11-09 17:17:21.995350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.243 [2024-11-09 17:17:21.995365] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.503 #43 NEW cov: 11888 ft: 14584 corp: 13/786b lim: 120 exec/s: 0 rss: 69Mb L: 90/111 MS: 1 CopyPart- 00:08:02.503 [2024-11-09 17:17:22.035460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.503 [2024-11-09 17:17:22.035486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.503 [2024-11-09 17:17:22.035541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.503 [2024-11-09 17:17:22.035557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.503 [2024-11-09 17:17:22.035607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4278190080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.503 [2024-11-09 17:17:22.035621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.503 [2024-11-09 17:17:22.035673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.503 [2024-11-09 17:17:22.035691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.503 #44 NEW cov: 11888 ft: 14623 corp: 14/887b lim: 120 exec/s: 0 rss: 69Mb L: 101/111 MS: 1 InsertRepeatedBytes- 00:08:02.503 [2024-11-09 17:17:22.075168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660864 len:62913 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.503 [2024-11-09 17:17:22.075195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.503 #45 NEW cov: 11888 ft: 14680 corp: 15/921b lim: 120 exec/s: 0 rss: 69Mb L: 34/111 MS: 1 EraseBytes- 00:08:02.503 [2024-11-09 17:17:22.115303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889382724726210752 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.503 [2024-11-09 17:17:22.115331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.503 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:02.503 #46 NEW cov: 11911 ft: 14732 corp: 16/947b lim: 120 exec/s: 0 rss: 69Mb L: 26/111 MS: 1 ChangeByte- 00:08:02.503 [2024-11-09 17:17:22.155425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.503 [2024-11-09 17:17:22.155452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.503 #47 NEW cov: 11911 ft: 14767 corp: 17/986b lim: 120 exec/s: 0 rss: 69Mb L: 39/111 MS: 1 ChangeBit- 00:08:02.503 [2024-11-09 17:17:22.195552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660868 len:62913 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:02.503 [2024-11-09 17:17:22.195578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.503 #48 NEW cov: 11911 ft: 14816 corp: 18/1020b lim: 120 exec/s: 0 rss: 69Mb L: 34/111 MS: 1 ChangeBit- 00:08:02.503 [2024-11-09 17:17:22.235777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660864 len:62913 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.503 [2024-11-09 17:17:22.235805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.503 [2024-11-09 17:17:22.235844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13889382724726210752 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.503 [2024-11-09 17:17:22.235860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.503 #49 NEW cov: 11911 ft: 15192 corp: 19/1071b lim: 120 exec/s: 49 rss: 69Mb L: 51/111 MS: 1 CMP- DE: "6o\220\240\343\277\204\000"- 00:08:02.763 [2024-11-09 17:17:22.275744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.275772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.763 #50 NEW cov: 11911 ft: 15248 corp: 20/1110b lim: 120 exec/s: 50 rss: 69Mb L: 39/111 MS: 1 ChangeBit- 00:08:02.763 [2024-11-09 17:17:22.316175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.316203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.763 [2024-11-09 17:17:22.316239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.316257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.763 [2024-11-09 17:17:22.316311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.316326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.763 #51 NEW cov: 11911 ft: 15255 corp: 21/1200b lim: 120 exec/s: 51 rss: 69Mb L: 90/111 MS: 1 ChangeBinInt- 00:08:02.763 [2024-11-09 17:17:22.355958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660864 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.355985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.763 #52 NEW cov: 11911 ft: 15323 corp: 22/1239b lim: 120 exec/s: 52 rss: 69Mb L: 39/111 MS: 1 EraseBytes- 00:08:02.763 [2024-11-09 17:17:22.396105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:02.763 [2024-11-09 17:17:22.396133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.763 #53 NEW cov: 11911 ft: 15332 corp: 23/1278b lim: 120 exec/s: 53 rss: 70Mb L: 39/111 MS: 1 ChangeBit- 00:08:02.763 [2024-11-09 17:17:22.436199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.436227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.763 #54 NEW cov: 11911 ft: 15396 corp: 24/1312b lim: 120 exec/s: 54 rss: 70Mb L: 34/111 MS: 1 EraseBytes- 00:08:02.763 [2024-11-09 17:17:22.476750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.476777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.763 [2024-11-09 17:17:22.476825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.476840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.763 [2024-11-09 17:17:22.476891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4294967040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.476906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.763 [2024-11-09 17:17:22.476960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.476974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.763 #55 NEW cov: 11911 ft: 15413 corp: 25/1413b lim: 120 exec/s: 55 rss: 70Mb L: 101/111 MS: 1 InsertRepeatedBytes- 00:08:02.763 [2024-11-09 17:17:22.526437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-11-09 17:17:22.526464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.022 #56 NEW cov: 11911 ft: 15437 corp: 26/1437b lim: 120 exec/s: 56 rss: 70Mb L: 24/111 MS: 1 EraseBytes- 00:08:03.022 [2024-11-09 17:17:22.566547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660864 len:62913 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.022 [2024-11-09 17:17:22.566574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.022 #57 NEW cov: 11911 ft: 15500 corp: 27/1480b lim: 120 exec/s: 57 rss: 70Mb L: 43/111 MS: 1 ChangeByte- 00:08:03.022 [2024-11-09 17:17:22.606644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889382724726210752 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.022 [2024-11-09 17:17:22.606672] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.022 #58 NEW cov: 11911 ft: 15517 corp: 28/1507b lim: 120 exec/s: 58 rss: 70Mb L: 27/111 MS: 1 InsertByte- 00:08:03.022 [2024-11-09 17:17:22.647082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.022 [2024-11-09 17:17:22.647109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.022 [2024-11-09 17:17:22.647145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.022 [2024-11-09 17:17:22.647161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.022 [2024-11-09 17:17:22.647215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.022 [2024-11-09 17:17:22.647231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.022 #59 NEW cov: 11911 ft: 15520 corp: 29/1597b lim: 120 exec/s: 59 rss: 70Mb L: 90/111 MS: 1 ShuffleBytes- 00:08:03.022 [2024-11-09 17:17:22.686918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.022 [2024-11-09 17:17:22.686945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.022 #60 NEW cov: 11911 ft: 15554 corp: 30/1632b lim: 120 exec/s: 60 rss: 70Mb L: 35/111 MS: 1 InsertByte- 00:08:03.022 [2024-11-09 17:17:22.727167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.022 [2024-11-09 17:17:22.727194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.022 [2024-11-09 17:17:22.727245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.022 [2024-11-09 17:17:22.727261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.022 #61 NEW cov: 11911 ft: 15574 corp: 31/1695b lim: 120 exec/s: 61 rss: 70Mb L: 63/111 MS: 1 EraseBytes- 00:08:03.022 [2024-11-09 17:17:22.767168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889382724726210752 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.022 [2024-11-09 17:17:22.767195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.022 #62 NEW cov: 11911 ft: 15614 corp: 32/1733b lim: 120 exec/s: 62 rss: 70Mb L: 38/111 MS: 1 CrossOver- 00:08:03.282 [2024-11-09 17:17:22.807316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493726400 len:62913 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.282 [2024-11-09 17:17:22.807343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.282 #63 NEW cov: 11911 ft: 15646 corp: 33/1772b lim: 120 exec/s: 63 rss: 70Mb L: 39/111 MS: 1 ChangeBit- 00:08:03.282 [2024-11-09 17:17:22.847377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660864 len:62913 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.282 [2024-11-09 17:17:22.847404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.282 #64 NEW cov: 11911 ft: 15656 corp: 34/1806b lim: 120 exec/s: 64 rss: 70Mb L: 34/111 MS: 1 ChangeBinInt- 00:08:03.282 [2024-11-09 17:17:22.887655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.282 [2024-11-09 17:17:22.887681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.282 [2024-11-09 17:17:22.887738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13889313184910721218 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.282 [2024-11-09 17:17:22.887755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.282 #65 NEW cov: 11911 ft: 15679 corp: 35/1870b lim: 120 exec/s: 65 rss: 70Mb L: 64/111 MS: 1 CopyPart- 00:08:03.282 [2024-11-09 17:17:22.927596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18443859783065862143 len:49408 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.282 [2024-11-09 17:17:22.927623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.282 #66 NEW cov: 11911 ft: 15696 corp: 36/1913b lim: 120 exec/s: 66 rss: 70Mb L: 43/111 MS: 1 PersAutoDict- DE: "\377\377\377\365"- 00:08:03.282 [2024-11-09 17:17:22.967710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660864 len:62913 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.282 [2024-11-09 17:17:22.967737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.282 #67 NEW cov: 11911 ft: 15706 corp: 37/1952b lim: 120 exec/s: 67 rss: 70Mb L: 39/111 MS: 1 ChangeByte- 00:08:03.282 [2024-11-09 17:17:22.997808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313185971830773 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.282 [2024-11-09 17:17:22.997835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.282 #68 NEW cov: 11911 ft: 15744 corp: 38/1989b lim: 120 exec/s: 68 rss: 70Mb L: 37/111 MS: 1 EraseBytes- 00:08:03.282 [2024-11-09 17:17:23.037918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313455493660864 len:62919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.282 [2024-11-09 17:17:23.037945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.542 #69 NEW cov: 11911 ft: 15759 corp: 39/2023b lim: 120 exec/s: 69 rss: 70Mb L: 34/111 MS: 1 ChangeBinInt- 00:08:03.542 [2024-11-09 17:17:23.078490] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3070230528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.542 [2024-11-09 17:17:23.078518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.542 [2024-11-09 17:17:23.078564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.542 [2024-11-09 17:17:23.078577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.542 [2024-11-09 17:17:23.078630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.542 [2024-11-09 17:17:23.078644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.542 [2024-11-09 17:17:23.078698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.542 [2024-11-09 17:17:23.078713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.542 #75 NEW cov: 11911 ft: 15764 corp: 40/2134b lim: 120 exec/s: 75 rss: 70Mb L: 111/111 MS: 1 CrossOver- 00:08:03.542 [2024-11-09 17:17:23.118140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14249601425683300544 len:62913 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.542 [2024-11-09 17:17:23.118167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.542 #76 NEW cov: 11911 ft: 15779 corp: 41/2177b lim: 120 exec/s: 76 rss: 70Mb L: 43/111 MS: 1 ChangeBinInt- 00:08:03.542 [2024-11-09 17:17:23.158431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889313184910721216 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.542 [2024-11-09 17:17:23.158458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.542 [2024-11-09 17:17:23.158499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14033428372986577088 len:49345 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.542 [2024-11-09 17:17:23.158515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.542 #77 NEW cov: 11911 ft: 15810 corp: 42/2242b lim: 120 exec/s: 77 rss: 70Mb L: 65/111 MS: 1 InsertByte- 00:08:03.542 [2024-11-09 17:17:23.198359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13889382724726210752 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.542 [2024-11-09 17:17:23.198386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.542 #78 NEW cov: 11911 ft: 15821 corp: 43/2268b lim: 120 exec/s: 78 rss: 70Mb L: 26/111 MS: 1 CopyPart- 00:08:03.542 [2024-11-09 17:17:23.238455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16410980703609524384 len:62919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.542 [2024-11-09 
17:17:23.238483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.542 #79 NEW cov: 11911 ft: 15831 corp: 44/2302b lim: 120 exec/s: 39 rss: 70Mb L: 34/111 MS: 1 PersAutoDict- DE: "6o\220\240\343\277\204\000"- 00:08:03.542 #79 DONE cov: 11911 ft: 15831 corp: 44/2302b lim: 120 exec/s: 39 rss: 70Mb 00:08:03.542 ###### Recommended dictionary. ###### 00:08:03.542 "\377\377\377\365" # Uses: 3 00:08:03.542 "6o\220\240\343\277\204\000" # Uses: 2 00:08:03.542 ###### End of recommended dictionary. ###### 00:08:03.542 Done 79 runs in 2 second(s) 00:08:03.802 17:17:23 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_17.conf 00:08:03.802 17:17:23 -- ../common.sh@72 -- # (( i++ )) 00:08:03.802 17:17:23 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.802 17:17:23 -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:03.802 17:17:23 -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:03.802 17:17:23 -- nvmf/run.sh@24 -- # local timen=1 00:08:03.802 17:17:23 -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.802 17:17:23 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:03.802 17:17:23 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:03.802 17:17:23 -- nvmf/run.sh@29 -- # printf %02d 18 00:08:03.802 17:17:23 -- nvmf/run.sh@29 -- # port=4418 00:08:03.802 17:17:23 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:03.802 17:17:23 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:03.802 17:17:23 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.802 17:17:23 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 -r /var/tmp/spdk18.sock 00:08:03.802 [2024-11-09 17:17:23.428958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:03.802 [2024-11-09 17:17:23.429026] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373824 ] 00:08:03.802 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.061 [2024-11-09 17:17:23.601669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.061 [2024-11-09 17:17:23.665816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.062 [2024-11-09 17:17:23.665938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.062 [2024-11-09 17:17:23.723678] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.062 [2024-11-09 17:17:23.740044] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:04.062 INFO: Running with entropic power schedule (0xFF, 100). 
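For reference, the start_llvm_fuzz steps traced at 17:17:23 above amount to the following shell sketch. It is reconstructed only from the commands visible in this log: the function wrapper, the redirection of the sed output into the per-run config, the rm placement, and the $rootdir/$output_dir variables are assumptions, while the port derivation, trid string, and fuzzer flags appear verbatim in the trace.

start_llvm_fuzz() {                                  # sketch; the real definition lives in nvmf/run.sh
    local fuzzer_type=$1 timen=$2 core=$3            # traced call: start_llvm_fuzz 18 1 0x1
    local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    local port="44$(printf %02d $fuzzer_type)"       # fuzzer 18 -> TCP port 4418
    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    mkdir -p $corpus_dir
    # rewrite the default trsvcid (4420) in the JSON config for this run; redirection target assumed
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg
    $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
        -P $output_dir/llvm/ -F "$trid" -c $nvmf_cfg -t $timen \
        -D $corpus_dir -Z $fuzzer_type -r /var/tmp/spdk$fuzzer_type.sock
    rm -rf $nvmf_cfg                                 # cleanup traced at nvmf/run.sh@46
}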
00:08:04.062 INFO: Seed: 3141315395 00:08:04.062 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:04.062 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:04.062 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:04.062 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.062 #2 INITED exec/s: 0 rss: 60Mb 00:08:04.062 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:04.062 This may also happen if the target rejected all inputs we tried so far 00:08:04.062 [2024-11-09 17:17:23.795250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.062 [2024-11-09 17:17:23.795280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.062 [2024-11-09 17:17:23.795320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.062 [2024-11-09 17:17:23.795333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.062 [2024-11-09 17:17:23.795384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.062 [2024-11-09 17:17:23.795397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.321 NEW_FUNC[1/670]: 0x457e58 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:04.321 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:04.321 #78 NEW cov: 11628 ft: 11616 corp: 2/76b lim: 100 exec/s: 0 rss: 68Mb L: 75/75 MS: 1 InsertRepeatedBytes- 00:08:04.581 [2024-11-09 17:17:24.096002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.582 [2024-11-09 17:17:24.096036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.096088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.582 [2024-11-09 17:17:24.096102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.096151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.582 [2024-11-09 17:17:24.096164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.582 #79 NEW cov: 11741 ft: 12176 corp: 3/152b lim: 100 exec/s: 0 rss: 68Mb L: 76/76 MS: 1 InsertByte- 00:08:04.582 [2024-11-09 17:17:24.146116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.582 [2024-11-09 17:17:24.146143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.146211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.582 [2024-11-09 
17:17:24.146226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.146278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.582 [2024-11-09 17:17:24.146291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.582 #80 NEW cov: 11747 ft: 12350 corp: 4/227b lim: 100 exec/s: 0 rss: 68Mb L: 75/76 MS: 1 ChangeBinInt- 00:08:04.582 [2024-11-09 17:17:24.186200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.582 [2024-11-09 17:17:24.186229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.186263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.582 [2024-11-09 17:17:24.186276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.186330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.582 [2024-11-09 17:17:24.186344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.582 #81 NEW cov: 11832 ft: 12632 corp: 5/302b lim: 100 exec/s: 0 rss: 68Mb L: 75/76 MS: 1 ShuffleBytes- 00:08:04.582 [2024-11-09 17:17:24.226182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.582 [2024-11-09 17:17:24.226208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.226247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.582 [2024-11-09 17:17:24.226260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.582 #82 NEW cov: 11832 ft: 13032 corp: 6/346b lim: 100 exec/s: 0 rss: 68Mb L: 44/76 MS: 1 EraseBytes- 00:08:04.582 [2024-11-09 17:17:24.266502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.582 [2024-11-09 17:17:24.266532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.266583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.582 [2024-11-09 17:17:24.266597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.266647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.582 [2024-11-09 17:17:24.266660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.266713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.582 [2024-11-09 17:17:24.266726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.582 #83 NEW cov: 11832 ft: 13391 corp: 7/433b lim: 100 exec/s: 0 rss: 68Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:08:04.582 [2024-11-09 17:17:24.306448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.582 [2024-11-09 17:17:24.306474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.306530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.582 [2024-11-09 17:17:24.306545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.582 #84 NEW cov: 11832 ft: 13448 corp: 8/477b lim: 100 exec/s: 0 rss: 68Mb L: 44/87 MS: 1 ChangeBinInt- 00:08:04.582 [2024-11-09 17:17:24.346792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.582 [2024-11-09 17:17:24.346817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.346867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.582 [2024-11-09 17:17:24.346880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.346932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.582 [2024-11-09 17:17:24.346944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.582 [2024-11-09 17:17:24.346997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.582 [2024-11-09 17:17:24.347010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.842 #85 NEW cov: 11832 ft: 13480 corp: 9/564b lim: 100 exec/s: 0 rss: 68Mb L: 87/87 MS: 1 CrossOver- 00:08:04.842 [2024-11-09 17:17:24.386871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.842 [2024-11-09 17:17:24.386897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.386945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.842 [2024-11-09 17:17:24.386958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.387009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.842 [2024-11-09 17:17:24.387023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.387073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.842 [2024-11-09 17:17:24.387087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 
m:0 dnr:1 00:08:04.842 #86 NEW cov: 11832 ft: 13536 corp: 10/662b lim: 100 exec/s: 0 rss: 68Mb L: 98/98 MS: 1 CrossOver- 00:08:04.842 [2024-11-09 17:17:24.426999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.842 [2024-11-09 17:17:24.427024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.427067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.842 [2024-11-09 17:17:24.427081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.427133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.842 [2024-11-09 17:17:24.427147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.427200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.842 [2024-11-09 17:17:24.427213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.842 #87 NEW cov: 11832 ft: 13581 corp: 11/749b lim: 100 exec/s: 0 rss: 68Mb L: 87/98 MS: 1 ChangeByte- 00:08:04.842 [2024-11-09 17:17:24.466909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.842 [2024-11-09 17:17:24.466938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.466994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.842 [2024-11-09 17:17:24.467008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.842 #88 NEW cov: 11832 ft: 13639 corp: 12/793b lim: 100 exec/s: 0 rss: 68Mb L: 44/98 MS: 1 ChangeBinInt- 00:08:04.842 [2024-11-09 17:17:24.507221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.842 [2024-11-09 17:17:24.507247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.507296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.842 [2024-11-09 17:17:24.507309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.507354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.842 [2024-11-09 17:17:24.507367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.507419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.842 [2024-11-09 17:17:24.507433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.842 #89 NEW cov: 11832 ft: 13694 corp: 13/873b lim: 100 
exec/s: 0 rss: 68Mb L: 80/98 MS: 1 EraseBytes- 00:08:04.842 [2024-11-09 17:17:24.547227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.842 [2024-11-09 17:17:24.547251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.547296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.842 [2024-11-09 17:17:24.547310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.547362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.842 [2024-11-09 17:17:24.547376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.842 #90 NEW cov: 11832 ft: 13728 corp: 14/949b lim: 100 exec/s: 0 rss: 68Mb L: 76/98 MS: 1 ChangeBinInt- 00:08:04.842 [2024-11-09 17:17:24.587515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.842 [2024-11-09 17:17:24.587545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.587592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.842 [2024-11-09 17:17:24.587606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.587650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.842 [2024-11-09 17:17:24.587664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.842 [2024-11-09 17:17:24.587717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.842 [2024-11-09 17:17:24.587730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.102 #91 NEW cov: 11832 ft: 13759 corp: 15/1029b lim: 100 exec/s: 0 rss: 68Mb L: 80/98 MS: 1 CopyPart- 00:08:05.102 [2024-11-09 17:17:24.627633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.102 [2024-11-09 17:17:24.627661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.627697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.102 [2024-11-09 17:17:24.627711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.627762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.102 [2024-11-09 17:17:24.627775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.627841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 
00:08:05.102 [2024-11-09 17:17:24.627856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.102 #92 NEW cov: 11832 ft: 13788 corp: 16/1117b lim: 100 exec/s: 0 rss: 69Mb L: 88/98 MS: 1 CopyPart- 00:08:05.102 [2024-11-09 17:17:24.667484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.102 [2024-11-09 17:17:24.667509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.667551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.102 [2024-11-09 17:17:24.667564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.102 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:05.102 #93 NEW cov: 11855 ft: 13842 corp: 17/1161b lim: 100 exec/s: 0 rss: 69Mb L: 44/98 MS: 1 ChangeBinInt- 00:08:05.102 [2024-11-09 17:17:24.707836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.102 [2024-11-09 17:17:24.707861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.707908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.102 [2024-11-09 17:17:24.707922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.707973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.102 [2024-11-09 17:17:24.707986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.708037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.102 [2024-11-09 17:17:24.708051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.102 #94 NEW cov: 11855 ft: 13866 corp: 18/1255b lim: 100 exec/s: 0 rss: 69Mb L: 94/98 MS: 1 CopyPart- 00:08:05.102 [2024-11-09 17:17:24.747926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.102 [2024-11-09 17:17:24.747951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.748002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.102 [2024-11-09 17:17:24.748016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.748067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.102 [2024-11-09 17:17:24.748081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.748134] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.102 [2024-11-09 17:17:24.748147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.102 #95 NEW cov: 11855 ft: 13967 corp: 19/1354b lim: 100 exec/s: 95 rss: 69Mb L: 99/99 MS: 1 CrossOver- 00:08:05.102 [2024-11-09 17:17:24.787861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.102 [2024-11-09 17:17:24.787886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.787935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.102 [2024-11-09 17:17:24.787949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.102 #96 NEW cov: 11855 ft: 13982 corp: 20/1399b lim: 100 exec/s: 96 rss: 69Mb L: 45/99 MS: 1 EraseBytes- 00:08:05.102 [2024-11-09 17:17:24.827972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.102 [2024-11-09 17:17:24.827997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.102 [2024-11-09 17:17:24.828030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.102 [2024-11-09 17:17:24.828043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.102 #97 NEW cov: 11855 ft: 14000 corp: 21/1443b lim: 100 exec/s: 97 rss: 69Mb L: 44/99 MS: 1 ChangeBinInt- 00:08:05.102 [2024-11-09 17:17:24.868373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.103 [2024-11-09 17:17:24.868399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.103 [2024-11-09 17:17:24.868449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.103 [2024-11-09 17:17:24.868464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.103 [2024-11-09 17:17:24.868515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.103 [2024-11-09 17:17:24.868536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.103 [2024-11-09 17:17:24.868588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.103 [2024-11-09 17:17:24.868602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.362 #98 NEW cov: 11855 ft: 14046 corp: 22/1541b lim: 100 exec/s: 98 rss: 69Mb L: 98/99 MS: 1 CopyPart- 00:08:05.362 [2024-11-09 17:17:24.908439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.362 [2024-11-09 17:17:24.908465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.362 
[2024-11-09 17:17:24.908514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.362 [2024-11-09 17:17:24.908532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.362 [2024-11-09 17:17:24.908584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.362 [2024-11-09 17:17:24.908597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.362 [2024-11-09 17:17:24.908668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.362 [2024-11-09 17:17:24.908682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.362 #99 NEW cov: 11855 ft: 14063 corp: 23/1628b lim: 100 exec/s: 99 rss: 69Mb L: 87/99 MS: 1 ChangeBinInt- 00:08:05.362 [2024-11-09 17:17:24.948326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.362 [2024-11-09 17:17:24.948352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.362 [2024-11-09 17:17:24.948398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.362 [2024-11-09 17:17:24.948412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.362 #100 NEW cov: 11855 ft: 14080 corp: 24/1672b lim: 100 exec/s: 100 rss: 69Mb L: 44/99 MS: 1 ChangeBinInt- 00:08:05.362 [2024-11-09 17:17:24.988457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.362 [2024-11-09 17:17:24.988481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.362 [2024-11-09 17:17:24.988549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.362 [2024-11-09 17:17:24.988562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.362 #101 NEW cov: 11855 ft: 14121 corp: 25/1718b lim: 100 exec/s: 101 rss: 69Mb L: 46/99 MS: 1 InsertByte- 00:08:05.362 [2024-11-09 17:17:25.028575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.362 [2024-11-09 17:17:25.028601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.362 [2024-11-09 17:17:25.028637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.362 [2024-11-09 17:17:25.028649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.362 #102 NEW cov: 11855 ft: 14149 corp: 26/1762b lim: 100 exec/s: 102 rss: 69Mb L: 44/99 MS: 1 CopyPart- 00:08:05.362 [2024-11-09 17:17:25.068903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.362 [2024-11-09 17:17:25.068928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.362 [2024-11-09 17:17:25.068992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.362 [2024-11-09 17:17:25.069006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.362 [2024-11-09 17:17:25.069058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.363 [2024-11-09 17:17:25.069072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.363 [2024-11-09 17:17:25.069124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.363 [2024-11-09 17:17:25.069138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.363 #103 NEW cov: 11855 ft: 14169 corp: 27/1861b lim: 100 exec/s: 103 rss: 69Mb L: 99/99 MS: 1 CrossOver- 00:08:05.363 [2024-11-09 17:17:25.108993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.363 [2024-11-09 17:17:25.109019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.363 [2024-11-09 17:17:25.109064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.363 [2024-11-09 17:17:25.109078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.363 [2024-11-09 17:17:25.109129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.363 [2024-11-09 17:17:25.109146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.363 [2024-11-09 17:17:25.109198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.363 [2024-11-09 17:17:25.109213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.622 #104 NEW cov: 11855 ft: 14174 corp: 28/1955b lim: 100 exec/s: 104 rss: 69Mb L: 94/99 MS: 1 ChangeBit- 00:08:05.622 [2024-11-09 17:17:25.149138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.622 [2024-11-09 17:17:25.149162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.622 [2024-11-09 17:17:25.149211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.622 [2024-11-09 17:17:25.149224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.622 [2024-11-09 17:17:25.149274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.622 [2024-11-09 17:17:25.149299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.622 [2024-11-09 17:17:25.149349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 
00:08:05.622 [2024-11-09 17:17:25.149363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.622 #105 NEW cov: 11855 ft: 14190 corp: 29/2054b lim: 100 exec/s: 105 rss: 70Mb L: 99/99 MS: 1 CrossOver- 00:08:05.622 [2024-11-09 17:17:25.189251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.622 [2024-11-09 17:17:25.189277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.622 [2024-11-09 17:17:25.189324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.622 [2024-11-09 17:17:25.189337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.622 [2024-11-09 17:17:25.189388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.622 [2024-11-09 17:17:25.189418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.622 [2024-11-09 17:17:25.189470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.622 [2024-11-09 17:17:25.189483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.622 #111 NEW cov: 11855 ft: 14201 corp: 30/2153b lim: 100 exec/s: 111 rss: 70Mb L: 99/99 MS: 1 ChangeByte- 00:08:05.622 [2024-11-09 17:17:25.229124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.622 [2024-11-09 17:17:25.229149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.622 [2024-11-09 17:17:25.229196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.622 [2024-11-09 17:17:25.229210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.622 #112 NEW cov: 11855 ft: 14214 corp: 31/2206b lim: 100 exec/s: 112 rss: 70Mb L: 53/99 MS: 1 EraseBytes- 00:08:05.622 [2024-11-09 17:17:25.269141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.622 [2024-11-09 17:17:25.269167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.622 #113 NEW cov: 11855 ft: 14604 corp: 32/2244b lim: 100 exec/s: 113 rss: 70Mb L: 38/99 MS: 1 EraseBytes- 00:08:05.622 [2024-11-09 17:17:25.309398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.622 [2024-11-09 17:17:25.309424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.622 [2024-11-09 17:17:25.309472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.622 [2024-11-09 17:17:25.309486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.622 #114 NEW cov: 11855 ft: 14740 corp: 33/2288b lim: 100 exec/s: 114 rss: 70Mb 
L: 44/99 MS: 1 ChangeByte- 00:08:05.622 [2024-11-09 17:17:25.349488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.622 [2024-11-09 17:17:25.349514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.622 [2024-11-09 17:17:25.349556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.622 [2024-11-09 17:17:25.349570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.622 #115 NEW cov: 11855 ft: 14786 corp: 34/2336b lim: 100 exec/s: 115 rss: 70Mb L: 48/99 MS: 1 InsertRepeatedBytes- 00:08:05.622 [2024-11-09 17:17:25.389536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.622 [2024-11-09 17:17:25.389562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.882 #116 NEW cov: 11855 ft: 14841 corp: 35/2374b lim: 100 exec/s: 116 rss: 70Mb L: 38/99 MS: 1 ChangeBinInt- 00:08:05.882 [2024-11-09 17:17:25.429765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.882 [2024-11-09 17:17:25.429791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.429840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.882 [2024-11-09 17:17:25.429855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.882 #117 NEW cov: 11855 ft: 14853 corp: 36/2419b lim: 100 exec/s: 117 rss: 70Mb L: 45/99 MS: 1 CopyPart- 00:08:05.882 [2024-11-09 17:17:25.470110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.882 [2024-11-09 17:17:25.470136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.470185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.882 [2024-11-09 17:17:25.470200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.470253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.882 [2024-11-09 17:17:25.470267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.470318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.882 [2024-11-09 17:17:25.470332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.882 #118 NEW cov: 11855 ft: 14935 corp: 37/2507b lim: 100 exec/s: 118 rss: 70Mb L: 88/99 MS: 1 InsertByte- 00:08:05.882 [2024-11-09 17:17:25.510079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.882 [2024-11-09 17:17:25.510105] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.510142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.882 [2024-11-09 17:17:25.510156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.510209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.882 [2024-11-09 17:17:25.510224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.882 #119 NEW cov: 11855 ft: 14963 corp: 38/2583b lim: 100 exec/s: 119 rss: 70Mb L: 76/99 MS: 1 ChangeByte- 00:08:05.882 [2024-11-09 17:17:25.540151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.882 [2024-11-09 17:17:25.540176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.540224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.882 [2024-11-09 17:17:25.540238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.540290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.882 [2024-11-09 17:17:25.540304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.882 #120 NEW cov: 11855 ft: 15003 corp: 39/2658b lim: 100 exec/s: 120 rss: 70Mb L: 75/99 MS: 1 ShuffleBytes- 00:08:05.882 [2024-11-09 17:17:25.580168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.882 [2024-11-09 17:17:25.580194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.580229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.882 [2024-11-09 17:17:25.580243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.882 #121 NEW cov: 11855 ft: 15004 corp: 40/2702b lim: 100 exec/s: 121 rss: 70Mb L: 44/99 MS: 1 ShuffleBytes- 00:08:05.882 [2024-11-09 17:17:25.620617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.882 [2024-11-09 17:17:25.620643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.620698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.882 [2024-11-09 17:17:25.620711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.620762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.882 [2024-11-09 17:17:25.620776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.620828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.882 [2024-11-09 17:17:25.620841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.882 [2024-11-09 17:17:25.620895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:08:05.882 [2024-11-09 17:17:25.620908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.882 #122 NEW cov: 11855 ft: 15033 corp: 41/2802b lim: 100 exec/s: 122 rss: 70Mb L: 100/100 MS: 1 InsertByte- 00:08:06.142 [2024-11-09 17:17:25.660369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.142 [2024-11-09 17:17:25.660394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.142 [2024-11-09 17:17:25.660443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.142 [2024-11-09 17:17:25.660456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.142 #123 NEW cov: 11855 ft: 15055 corp: 42/2846b lim: 100 exec/s: 123 rss: 70Mb L: 44/100 MS: 1 ChangeBit- 00:08:06.142 [2024-11-09 17:17:25.700483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.142 [2024-11-09 17:17:25.700507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.142 [2024-11-09 17:17:25.700548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.142 [2024-11-09 17:17:25.700562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.142 #124 NEW cov: 11855 ft: 15065 corp: 43/2899b lim: 100 exec/s: 124 rss: 70Mb L: 53/100 MS: 1 ChangeBit- 00:08:06.142 [2024-11-09 17:17:25.740629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.142 [2024-11-09 17:17:25.740655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.142 [2024-11-09 17:17:25.740705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.142 [2024-11-09 17:17:25.740719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.142 #125 NEW cov: 11855 ft: 15078 corp: 44/2943b lim: 100 exec/s: 125 rss: 70Mb L: 44/100 MS: 1 ChangeByte- 00:08:06.142 [2024-11-09 17:17:25.780873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.142 [2024-11-09 17:17:25.780898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.142 [2024-11-09 17:17:25.780931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.142 [2024-11-09 17:17:25.780945] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.142 [2024-11-09 17:17:25.780999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.142 [2024-11-09 17:17:25.781013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.142 #126 NEW cov: 11855 ft: 15086 corp: 45/3013b lim: 100 exec/s: 63 rss: 70Mb L: 70/100 MS: 1 InsertRepeatedBytes- 00:08:06.142 #126 DONE cov: 11855 ft: 15086 corp: 45/3013b lim: 100 exec/s: 63 rss: 70Mb 00:08:06.142 Done 126 runs in 2 second(s) 00:08:06.401 17:17:25 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_18.conf 00:08:06.401 17:17:25 -- ../common.sh@72 -- # (( i++ )) 00:08:06.402 17:17:25 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.402 17:17:25 -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:06.402 17:17:25 -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:06.402 17:17:25 -- nvmf/run.sh@24 -- # local timen=1 00:08:06.402 17:17:25 -- nvmf/run.sh@25 -- # local core=0x1 00:08:06.402 17:17:25 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:06.402 17:17:25 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:06.402 17:17:25 -- nvmf/run.sh@29 -- # printf %02d 19 00:08:06.402 17:17:25 -- nvmf/run.sh@29 -- # port=4419 00:08:06.402 17:17:25 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:06.402 17:17:25 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:06.402 17:17:25 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:06.402 17:17:25 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 -r /var/tmp/spdk19.sock 00:08:06.402 [2024-11-09 17:17:25.975262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:06.402 [2024-11-09 17:17:25.975353] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374360 ] 00:08:06.402 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.402 [2024-11-09 17:17:26.150843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.661 [2024-11-09 17:17:26.214985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:06.661 [2024-11-09 17:17:26.215107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.661 [2024-11-09 17:17:26.272894] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.661 [2024-11-09 17:17:26.289258] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:06.661 INFO: Running with entropic power schedule (0xFF, 100). 
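The nvmf/run.sh trace above (script lines @23 through @36) is the whole per-target launch protocol: drop the previous target's config, derive a dedicated TCP service port from the fuzzer index (19 -> 4419), create a fresh corpus directory, rewrite trsvcid in the fuzz_json.conf template, and start llvm_nvme_fuzz against the resulting transport ID. A minimal bash sketch of that sequence follows; the variable names (fuzzer_type, timen, core, corpus_dir, nvmf_cfg, port, trid) are taken from the trace, while the shortened flag list, the SPDK_DIR shorthand, and the sed output redirect are assumptions, since the trace does not show where sed writes its result.

    #!/usr/bin/env bash
    # Sketch of the launch steps traced above; not the harness's exact code.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}

    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local corpus_dir=$SPDK_DIR/../corpus/llvm_nvmf_$fuzzer_type
        local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf

        # Each target gets its own listener port: 19 -> 4419, 20 -> 4420, ...
        local port
        port=44$(printf %02d "$fuzzer_type")
        mkdir -p "$corpus_dir"

        local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

        # Rewrite the listener port in the JSON config template; the
        # redirect into $nvmf_cfg is assumed (the trace omits it).
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

        "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
            -m "$core" -s 512 -F "$trid" -c "$nvmf_cfg" \
            -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type" \
            -r /var/tmp/spdk$fuzzer_type.sock
    }

Invoked as start_llvm_fuzz 19 1 0x1, as in the trace, this yields the same trsvcid:4419 transport ID and /tmp/fuzz_json_19.conf config reported by the startup messages above (NVMe/TCP Target Listening on 127.0.0.1 port 4419).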
00:08:06.661 INFO: Seed: 1395341240 00:08:06.661 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:06.661 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:06.661 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:06.661 INFO: A corpus is not provided, starting from an empty corpus 00:08:06.661 #2 INITED exec/s: 0 rss: 60Mb 00:08:06.661 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:06.661 This may also happen if the target rejected all inputs we tried so far 00:08:06.661 [2024-11-09 17:17:26.334183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070136004607 len:65536 00:08:06.661 [2024-11-09 17:17:26.334214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.955 NEW_FUNC[1/670]: 0x45ae18 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:06.955 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:06.955 #6 NEW cov: 11600 ft: 11607 corp: 2/13b lim: 50 exec/s: 0 rss: 68Mb L: 12/12 MS: 4 ChangeBit-CrossOver-ChangeByte-InsertRepeatedBytes- 00:08:06.955 [2024-11-09 17:17:26.635150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:06.955 [2024-11-09 17:17:26.635184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.955 [2024-11-09 17:17:26.635233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:06.955 [2024-11-09 17:17:26.635248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.955 [2024-11-09 17:17:26.635299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:06.955 [2024-11-09 17:17:26.635314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.955 #17 NEW cov: 11719 ft: 12365 corp: 3/50b lim: 50 exec/s: 0 rss: 68Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:08:06.955 [2024-11-09 17:17:26.685216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:06.955 [2024-11-09 17:17:26.685244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.955 [2024-11-09 17:17:26.685285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:06.955 [2024-11-09 17:17:26.685300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.955 [2024-11-09 17:17:26.685347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:06.955 [2024-11-09 
17:17:26.685362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.955 #18 NEW cov: 11725 ft: 12577 corp: 4/87b lim: 50 exec/s: 0 rss: 68Mb L: 37/37 MS: 1 ShuffleBytes- 00:08:07.214 [2024-11-09 17:17:26.725345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.214 [2024-11-09 17:17:26.725372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.214 [2024-11-09 17:17:26.725423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:07.214 [2024-11-09 17:17:26.725438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.214 [2024-11-09 17:17:26.725486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.214 [2024-11-09 17:17:26.725501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.214 #19 NEW cov: 11810 ft: 12879 corp: 5/124b lim: 50 exec/s: 0 rss: 68Mb L: 37/37 MS: 1 ChangeByte- 00:08:07.214 [2024-11-09 17:17:26.765453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.214 [2024-11-09 17:17:26.765480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.214 [2024-11-09 17:17:26.765513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.214 [2024-11-09 17:17:26.765532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.214 [2024-11-09 17:17:26.765582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.214 [2024-11-09 17:17:26.765596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.214 #20 NEW cov: 11810 ft: 12908 corp: 6/162b lim: 50 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 InsertByte- 00:08:07.214 [2024-11-09 17:17:26.805532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.214 [2024-11-09 17:17:26.805558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.215 [2024-11-09 17:17:26.805599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.215 [2024-11-09 17:17:26.805613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.215 [2024-11-09 17:17:26.805661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.215 [2024-11-09 17:17:26.805676] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.215 #21 NEW cov: 11810 ft: 13025 corp: 7/200b lim: 50 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 ChangeBit- 00:08:07.215 [2024-11-09 17:17:26.845676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.215 [2024-11-09 17:17:26.845705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.215 [2024-11-09 17:17:26.845740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.215 [2024-11-09 17:17:26.845754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.215 [2024-11-09 17:17:26.845802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:637534208 len:36752 00:08:07.215 [2024-11-09 17:17:26.845817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.215 #22 NEW cov: 11810 ft: 13101 corp: 8/238b lim: 50 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 ChangeBinInt- 00:08:07.215 [2024-11-09 17:17:26.885527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070457589759 len:65536 00:08:07.215 [2024-11-09 17:17:26.885554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.215 #23 NEW cov: 11810 ft: 13161 corp: 9/251b lim: 50 exec/s: 0 rss: 68Mb L: 13/38 MS: 1 InsertByte- 00:08:07.215 [2024-11-09 17:17:26.925882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.215 [2024-11-09 17:17:26.925910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.215 [2024-11-09 17:17:26.925944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.215 [2024-11-09 17:17:26.925959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.215 [2024-11-09 17:17:26.926010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.215 [2024-11-09 17:17:26.926024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.215 #24 NEW cov: 11810 ft: 13186 corp: 10/289b lim: 50 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 ChangeBit- 00:08:07.215 [2024-11-09 17:17:26.965808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070457589759 len:63488 00:08:07.215 [2024-11-09 17:17:26.965836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.474 #25 NEW cov: 11810 ft: 13338 corp: 11/302b lim: 50 exec/s: 0 rss: 69Mb L: 13/38 MS: 1 ChangeBit- 00:08:07.474 [2024-11-09 17:17:27.006260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:1 00:08:07.474 [2024-11-09 17:17:27.006288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.474 [2024-11-09 17:17:27.006330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644713436974991 len:36668 00:08:07.474 [2024-11-09 17:17:27.006345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.475 [2024-11-09 17:17:27.006394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.475 [2024-11-09 17:17:27.006410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.475 [2024-11-09 17:17:27.006460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10232178351499349903 len:65536 00:08:07.475 [2024-11-09 17:17:27.006474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.475 #26 NEW cov: 11810 ft: 13600 corp: 12/344b lim: 50 exec/s: 0 rss: 69Mb L: 42/42 MS: 1 CMP- DE: "\000\000\000\010"- 00:08:07.475 [2024-11-09 17:17:27.046258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.475 [2024-11-09 17:17:27.046285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.475 [2024-11-09 17:17:27.046322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10359809180215250831 len:36752 00:08:07.475 [2024-11-09 17:17:27.046337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.475 [2024-11-09 17:17:27.046386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.475 [2024-11-09 17:17:27.046401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.475 #27 NEW cov: 11810 ft: 13638 corp: 13/382b lim: 50 exec/s: 0 rss: 69Mb L: 38/42 MS: 1 ChangeBinInt- 00:08:07.475 [2024-11-09 17:17:27.086368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.475 [2024-11-09 17:17:27.086395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.475 [2024-11-09 17:17:27.086429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:07.475 [2024-11-09 17:17:27.086444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.475 [2024-11-09 17:17:27.086494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36710 00:08:07.475 [2024-11-09 17:17:27.086509] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.475 #28 NEW cov: 11810 ft: 13664 corp: 14/420b lim: 50 exec/s: 0 rss: 69Mb L: 38/42 MS: 1 InsertByte- 00:08:07.475 [2024-11-09 17:17:27.126243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070136004607 len:65536 00:08:07.475 [2024-11-09 17:17:27.126269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.475 #29 NEW cov: 11810 ft: 13714 corp: 15/432b lim: 50 exec/s: 0 rss: 69Mb L: 12/42 MS: 1 CrossOver- 00:08:07.475 [2024-11-09 17:17:27.166586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.475 [2024-11-09 17:17:27.166614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.475 [2024-11-09 17:17:27.166649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:07.475 [2024-11-09 17:17:27.166663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.475 [2024-11-09 17:17:27.166713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10336166248539197327 len:28827 00:08:07.475 [2024-11-09 17:17:27.166728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.475 #30 NEW cov: 11810 ft: 13822 corp: 16/470b lim: 50 exec/s: 0 rss: 69Mb L: 38/42 MS: 1 ChangeBinInt- 00:08:07.475 [2024-11-09 17:17:27.206507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070136004607 len:65536 00:08:07.475 [2024-11-09 17:17:27.206539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.475 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:07.475 #31 NEW cov: 11833 ft: 13885 corp: 17/482b lim: 50 exec/s: 0 rss: 69Mb L: 12/42 MS: 1 ShuffleBytes- 00:08:07.734 [2024-11-09 17:17:27.246803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.734 [2024-11-09 17:17:27.246831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.734 [2024-11-09 17:17:27.246864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:07.734 [2024-11-09 17:17:27.246879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.734 [2024-11-09 17:17:27.246931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36710 00:08:07.734 [2024-11-09 17:17:27.246946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.734 #32 NEW cov: 11833 ft: 13962 corp: 18/520b lim: 50 exec/s: 0 rss: 
69Mb L: 38/42 MS: 1 ShuffleBytes- 00:08:07.734 [2024-11-09 17:17:27.286923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.734 [2024-11-09 17:17:27.286952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.734 [2024-11-09 17:17:27.286987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.734 [2024-11-09 17:17:27.287002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.734 [2024-11-09 17:17:27.287050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.734 [2024-11-09 17:17:27.287065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.734 #33 NEW cov: 11833 ft: 14013 corp: 19/558b lim: 50 exec/s: 0 rss: 69Mb L: 38/42 MS: 1 ChangeBit- 00:08:07.734 [2024-11-09 17:17:27.326957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.734 [2024-11-09 17:17:27.326984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.734 [2024-11-09 17:17:27.327035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.734 [2024-11-09 17:17:27.327049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.734 #34 NEW cov: 11833 ft: 14253 corp: 20/580b lim: 50 exec/s: 34 rss: 69Mb L: 22/42 MS: 1 EraseBytes- 00:08:07.735 [2024-11-09 17:17:27.366927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070136004607 len:65536 00:08:07.735 [2024-11-09 17:17:27.366953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.735 #35 NEW cov: 11833 ft: 14274 corp: 21/592b lim: 50 exec/s: 35 rss: 69Mb L: 12/42 MS: 1 ShuffleBytes- 00:08:07.735 [2024-11-09 17:17:27.407242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.735 [2024-11-09 17:17:27.407270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.735 [2024-11-09 17:17:27.407316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.735 [2024-11-09 17:17:27.407334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.735 [2024-11-09 17:17:27.407385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344642997858045839 len:65536 00:08:07.735 [2024-11-09 17:17:27.407399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.735 #36 NEW cov: 
11833 ft: 14280 corp: 22/626b lim: 50 exec/s: 36 rss: 69Mb L: 34/42 MS: 1 EraseBytes- 00:08:07.735 [2024-11-09 17:17:27.447161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070133907455 len:65536 00:08:07.735 [2024-11-09 17:17:27.447188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.735 #37 NEW cov: 11833 ft: 14352 corp: 23/638b lim: 50 exec/s: 37 rss: 69Mb L: 12/42 MS: 1 ChangeBit- 00:08:07.735 [2024-11-09 17:17:27.487516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.735 [2024-11-09 17:17:27.487548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.735 [2024-11-09 17:17:27.487583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:07.735 [2024-11-09 17:17:27.487597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.735 [2024-11-09 17:17:27.487648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36758 00:08:07.735 [2024-11-09 17:17:27.487662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.993 #38 NEW cov: 11833 ft: 14369 corp: 24/676b lim: 50 exec/s: 38 rss: 69Mb L: 38/42 MS: 1 ChangeByte- 00:08:07.994 [2024-11-09 17:17:27.527728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.994 [2024-11-09 17:17:27.527754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.527799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10304235949832245135 len:2192 00:08:07.994 [2024-11-09 17:17:27.527814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.527867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.994 [2024-11-09 17:17:27.527881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.527931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744071823134607 len:65536 00:08:07.994 [2024-11-09 17:17:27.527945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.994 #39 NEW cov: 11833 ft: 14379 corp: 25/717b lim: 50 exec/s: 39 rss: 69Mb L: 41/42 MS: 1 PersAutoDict- DE: "\000\000\000\010"- 00:08:07.994 [2024-11-09 17:17:27.567666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.994 [2024-11-09 17:17:27.567692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.567727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:07.994 [2024-11-09 17:17:27.567744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.567795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344503978356608911 len:36752 00:08:07.994 [2024-11-09 17:17:27.567810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.994 #40 NEW cov: 11833 ft: 14403 corp: 26/754b lim: 50 exec/s: 40 rss: 69Mb L: 37/42 MS: 1 ChangeBit- 00:08:07.994 [2024-11-09 17:17:27.607884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.994 [2024-11-09 17:17:27.607912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.607947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.994 [2024-11-09 17:17:27.607961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.608012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344642997858045839 len:65536 00:08:07.994 [2024-11-09 17:17:27.608028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.994 #41 NEW cov: 11833 ft: 14421 corp: 27/788b lim: 50 exec/s: 41 rss: 70Mb L: 34/42 MS: 1 ShuffleBytes- 00:08:07.994 [2024-11-09 17:17:27.648073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.994 [2024-11-09 17:17:27.648099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.648136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.994 [2024-11-09 17:17:27.648151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.648203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.994 [2024-11-09 17:17:27.648217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.648267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10232178351499349903 len:65536 00:08:07.994 [2024-11-09 17:17:27.648281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.994 #42 NEW cov: 11833 ft: 14441 corp: 28/830b lim: 50 exec/s: 42 rss: 70Mb L: 42/42 MS: 1 CrossOver- 
00:08:07.994 [2024-11-09 17:17:27.688170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.994 [2024-11-09 17:17:27.688196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.688251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:07.994 [2024-11-09 17:17:27.688267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.688316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:637534208 len:36752 00:08:07.994 [2024-11-09 17:17:27.688331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.688381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446743775477759999 len:56284 00:08:07.994 [2024-11-09 17:17:27.688399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.994 #43 NEW cov: 11833 ft: 14463 corp: 29/875b lim: 50 exec/s: 43 rss: 70Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:08:07.994 [2024-11-09 17:17:27.728207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:07.994 [2024-11-09 17:17:27.728234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.728267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:07.994 [2024-11-09 17:17:27.728281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.994 [2024-11-09 17:17:27.728332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:07.994 [2024-11-09 17:17:27.728347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.994 #44 NEW cov: 11833 ft: 14466 corp: 30/912b lim: 50 exec/s: 44 rss: 70Mb L: 37/45 MS: 1 ShuffleBytes- 00:08:08.254 [2024-11-09 17:17:27.768311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:08.254 [2024-11-09 17:17:27.768338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.768374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:15248 00:08:08.254 [2024-11-09 17:17:27.768388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.768438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 
len:36752 00:08:08.254 [2024-11-09 17:17:27.768453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.254 #45 NEW cov: 11833 ft: 14477 corp: 31/950b lim: 50 exec/s: 45 rss: 70Mb L: 38/45 MS: 1 CrossOver- 00:08:08.254 [2024-11-09 17:17:27.808205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070136004607 len:63488 00:08:08.254 [2024-11-09 17:17:27.808232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.254 #46 NEW cov: 11833 ft: 14503 corp: 32/962b lim: 50 exec/s: 46 rss: 70Mb L: 12/45 MS: 1 ChangeBit- 00:08:08.254 [2024-11-09 17:17:27.848347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16140901060922310655 len:63488 00:08:08.254 [2024-11-09 17:17:27.848374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.254 #47 NEW cov: 11833 ft: 14522 corp: 33/974b lim: 50 exec/s: 47 rss: 70Mb L: 12/45 MS: 1 ChangeBit- 00:08:08.254 [2024-11-09 17:17:27.888660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:157845047738368 len:36752 00:08:08.254 [2024-11-09 17:17:27.888686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.888720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801269135 len:36752 00:08:08.254 [2024-11-09 17:17:27.888735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.888785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344642997858045839 len:65536 00:08:08.254 [2024-11-09 17:17:27.888804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.254 #48 NEW cov: 11833 ft: 14543 corp: 34/1008b lim: 50 exec/s: 48 rss: 70Mb L: 34/45 MS: 1 CrossOver- 00:08:08.254 [2024-11-09 17:17:27.928896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2251812866359296 len:772 00:08:08.254 [2024-11-09 17:17:27.928922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.928964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 00:08:08.254 [2024-11-09 17:17:27.928980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.929032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 00:08:08.254 [2024-11-09 17:17:27.929047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.929097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 
nsid:0 lba:217020518514230019 len:772 00:08:08.254 [2024-11-09 17:17:27.929112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.254 #52 NEW cov: 11833 ft: 14553 corp: 35/1057b lim: 50 exec/s: 52 rss: 70Mb L: 49/49 MS: 4 ShuffleBytes-PersAutoDict-CopyPart-InsertRepeatedBytes- DE: "\000\000\000\010"- 00:08:08.254 [2024-11-09 17:17:27.968996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:1 00:08:08.254 [2024-11-09 17:17:27.969022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.969063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644713436974991 len:36668 00:08:08.254 [2024-11-09 17:17:27.969078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.969130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715849027471 len:36752 00:08:08.254 [2024-11-09 17:17:27.969144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:27.969195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10232178351499349903 len:65536 00:08:08.254 [2024-11-09 17:17:27.969209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.254 #53 NEW cov: 11833 ft: 14564 corp: 36/1099b lim: 50 exec/s: 53 rss: 70Mb L: 42/49 MS: 1 ChangeByte- 00:08:08.254 [2024-11-09 17:17:28.009002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:08.254 [2024-11-09 17:17:28.009028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:28.009078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 00:08:08.254 [2024-11-09 17:17:28.009092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.254 [2024-11-09 17:17:28.009145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344503978356608911 len:65536 00:08:08.254 [2024-11-09 17:17:28.009160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.513 #54 NEW cov: 11833 ft: 14617 corp: 37/1136b lim: 50 exec/s: 54 rss: 70Mb L: 37/49 MS: 1 ShuffleBytes- 00:08:08.513 [2024-11-09 17:17:28.049135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:08.513 [2024-11-09 17:17:28.049162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.513 [2024-11-09 17:17:28.049198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:1 nsid:0 lba:10321000817734160271 len:36752 00:08:08.513 [2024-11-09 17:17:28.049213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.513 [2024-11-09 17:17:28.049263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:08.513 [2024-11-09 17:17:28.049278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.513 #55 NEW cov: 11833 ft: 14636 corp: 38/1174b lim: 50 exec/s: 55 rss: 70Mb L: 38/49 MS: 1 ChangeBit- 00:08:08.513 [2024-11-09 17:17:28.089089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8370784329151050868 len:65536 00:08:08.513 [2024-11-09 17:17:28.089116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.513 #56 NEW cov: 11833 ft: 14643 corp: 39/1191b lim: 50 exec/s: 56 rss: 70Mb L: 17/49 MS: 1 InsertRepeatedBytes- 00:08:08.513 [2024-11-09 17:17:28.129476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:08.513 [2024-11-09 17:17:28.129503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.513 [2024-11-09 17:17:28.129543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10321000817801267087 len:36752 00:08:08.513 [2024-11-09 17:17:28.129558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.513 [2024-11-09 17:17:28.129610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:08.513 [2024-11-09 17:17:28.129625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.513 [2024-11-09 17:17:28.129676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10232178351499349903 len:65536 00:08:08.513 [2024-11-09 17:17:28.129690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.513 #57 NEW cov: 11833 ft: 14659 corp: 40/1233b lim: 50 exec/s: 57 rss: 70Mb L: 42/49 MS: 1 ChangeBit- 00:08:08.514 [2024-11-09 17:17:28.169363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:08.514 [2024-11-09 17:17:28.169390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.514 [2024-11-09 17:17:28.169428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36864 00:08:08.514 [2024-11-09 17:17:28.169443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.514 #58 NEW cov: 11833 ft: 14677 corp: 41/1259b lim: 50 exec/s: 58 rss: 70Mb L: 26/49 MS: 1 EraseBytes- 00:08:08.514 [2024-11-09 17:17:28.209580] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:08.514 [2024-11-09 17:17:28.209606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.514 [2024-11-09 17:17:28.209647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36750 00:08:08.514 [2024-11-09 17:17:28.209662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.514 [2024-11-09 17:17:28.209711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10336166248539197327 len:28827 00:08:08.514 [2024-11-09 17:17:28.209725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.514 #59 NEW cov: 11833 ft: 14708 corp: 42/1297b lim: 50 exec/s: 59 rss: 70Mb L: 38/49 MS: 1 ChangeBinInt- 00:08:08.514 [2024-11-09 17:17:28.249691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:36752 00:08:08.514 [2024-11-09 17:17:28.249718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.514 [2024-11-09 17:17:28.249754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10304517424808955791 len:1 00:08:08.514 [2024-11-09 17:17:28.249767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.514 [2024-11-09 17:17:28.249817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344642995449495695 len:65536 00:08:08.514 [2024-11-09 17:17:28.249832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.514 #60 NEW cov: 11833 ft: 14719 corp: 43/1331b lim: 50 exec/s: 60 rss: 70Mb L: 34/49 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:08.773 [2024-11-09 17:17:28.289614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070133907212 len:65536 00:08:08.773 [2024-11-09 17:17:28.289640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.773 #61 NEW cov: 11833 ft: 14734 corp: 44/1343b lim: 50 exec/s: 61 rss: 70Mb L: 12/49 MS: 1 ChangeBinInt- 00:08:08.773 [2024-11-09 17:17:28.330048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:1 00:08:08.773 [2024-11-09 17:17:28.330073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.773 [2024-11-09 17:17:28.330128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644713436974991 len:36668 00:08:08.773 [2024-11-09 17:17:28.330142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.773 [2024-11-09 17:17:28.330193] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 00:08:08.773 [2024-11-09 17:17:28.330208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.773 [2024-11-09 17:17:28.330258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12103422112141199 len:65536 00:08:08.773 [2024-11-09 17:17:28.330272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.773 [2024-11-09 17:17:28.360125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18415095246405763071 len:1 00:08:08.773 [2024-11-09 17:17:28.360151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.773 [2024-11-09 17:17:28.360189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10344644715835558031 len:36752 00:08:08.773 [2024-11-09 17:17:28.360207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.773 [2024-11-09 17:17:28.360256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10344644714435678095 len:36752 00:08:08.773 [2024-11-09 17:17:28.360270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.773 [2024-11-09 17:17:28.360321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10304283224537272207 len:65536 00:08:08.773 [2024-11-09 17:17:28.360336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.773 #63 NEW cov: 11833 ft: 14750 corp: 45/1386b lim: 50 exec/s: 31 rss: 70Mb L: 43/49 MS: 2 ChangeBinInt-CopyPart- 00:08:08.773 #63 DONE cov: 11833 ft: 14750 corp: 45/1386b lim: 50 exec/s: 31 rss: 70Mb 00:08:08.773 ###### Recommended dictionary. ###### 00:08:08.773 "\000\000\000\010" # Uses: 2 00:08:08.773 "\001\000\000\000\000\000\000\000" # Uses: 0 00:08:08.773 ###### End of recommended dictionary. 
###### 00:08:08.773 Done 63 runs in 2 second(s) 00:08:08.773 17:17:28 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_19.conf 00:08:08.773 17:17:28 -- ../common.sh@72 -- # (( i++ )) 00:08:08.773 17:17:28 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:08.773 17:17:28 -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:08.773 17:17:28 -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:08.773 17:17:28 -- nvmf/run.sh@24 -- # local timen=1 00:08:08.773 17:17:28 -- nvmf/run.sh@25 -- # local core=0x1 00:08:08.773 17:17:28 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:08.773 17:17:28 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:08.773 17:17:28 -- nvmf/run.sh@29 -- # printf %02d 20 00:08:08.773 17:17:28 -- nvmf/run.sh@29 -- # port=4420 00:08:08.773 17:17:28 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:08.773 17:17:28 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:08.773 17:17:28 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:08.774 17:17:28 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 -r /var/tmp/spdk20.sock 00:08:09.033 [2024-11-09 17:17:28.545629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:09.033 [2024-11-09 17:17:28.545713] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374664 ] 00:08:09.033 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.033 [2024-11-09 17:17:28.726939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.033 [2024-11-09 17:17:28.790467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.033 [2024-11-09 17:17:28.790605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.293 [2024-11-09 17:17:28.849055] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.293 [2024-11-09 17:17:28.865431] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:09.293 INFO: Running with entropic power schedule (0xFF, 100). 00:08:09.293 INFO: Seed: 3971349153 00:08:09.293 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:09.293 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:09.293 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:09.293 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.293 #2 INITED exec/s: 0 rss: 61Mb 00:08:09.293 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
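Run 19 ends with libFuzzer's recommended-dictionary block (closed just above): the two tokens "\000\000\000\010" and "\001\000\000\000\000\000\000\000" are the strings from the CMP- and PersAutoDict- mutations that produced new coverage, printed as C-style octal escapes. In libFuzzer's dictionary file format the same bytes are written with \xNN hex escapes, one optionally named token per line, and a plain libFuzzer binary consumes such a file via -dict=FILE; whether this SPDK wrapper forwards that option is not shown in the log, so the file below (its name and the token labels are made up) is only a sketch of the format:

    # nvmf_19.dict -- hypothetical dictionary file transcribing the
    # tokens recommended above; '#' lines are ignored by libFuzzer.
    kw_zero_fill="\x00\x00\x00\x08"
    kw_one_lsb="\x01\x00\x00\x00\x00\x00\x00\x00"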
00:08:09.293 [2024-11-09 17:17:28.920814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0
00:08:09.293 [2024-11-09 17:17:28.920845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:09.293 [2024-11-09 17:17:28.920883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0
00:08:09.293 [2024-11-09 17:17:28.920897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:09.293 [2024-11-09 17:17:28.920954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0
00:08:09.293 [2024-11-09 17:17:28.920969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:09.552 NEW_FUNC[1/672]: 0x45c9d8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597
00:08:09.552 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:09.552 #3 NEW cov: 11657 ft: 11658 corp: 2/72b lim: 90 exec/s: 0 rss: 68Mb L: 71/71 MS: 1 InsertRepeatedBytes-
00:08:09.552 [2024-11-09 17:17:29.221588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0
00:08:09.552 [2024-11-09 17:17:29.221621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:09.552 [2024-11-09 17:17:29.221675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0
00:08:09.552 [2024-11-09 17:17:29.221692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:09.552 [2024-11-09 17:17:29.221748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0
00:08:09.552 [2024-11-09 17:17:29.221763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:09.552 #9 NEW cov: 11777 ft: 12083 corp: 3/143b lim: 90 exec/s: 0 rss: 68Mb L: 71/71 MS: 1 ChangeBinInt-
00:08:09.553 [2024-11-09 17:17:29.271488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0
00:08:09.553 [2024-11-09 17:17:29.271516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:09.553 [2024-11-09 17:17:29.271566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0
00:08:09.553 [2024-11-09 17:17:29.271582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:09.553 #11 NEW cov: 11783 ft: 12803 corp: 4/179b lim: 90 exec/s: 0 rss: 68Mb L: 36/71 MS: 2 InsertByte-InsertRepeatedBytes-
00:08:09.553 [2024-11-09 17:17:29.311787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0
00:08:09.553 [2024-11-09 17:17:29.311814] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.553 [2024-11-09 17:17:29.311852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.553 [2024-11-09 17:17:29.311869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.553 [2024-11-09 17:17:29.311927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.553 [2024-11-09 17:17:29.311942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.812 #12 NEW cov: 11868 ft: 13076 corp: 5/238b lim: 90 exec/s: 0 rss: 68Mb L: 59/71 MS: 1 EraseBytes- 00:08:09.812 [2024-11-09 17:17:29.351893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.812 [2024-11-09 17:17:29.351920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.351958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.812 [2024-11-09 17:17:29.351974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.352029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.812 [2024-11-09 17:17:29.352044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.812 #17 NEW cov: 11868 ft: 13202 corp: 6/299b lim: 90 exec/s: 0 rss: 68Mb L: 61/71 MS: 5 ChangeBit-ChangeBit-InsertRepeatedBytes-ShuffleBytes-CrossOver- 00:08:09.812 [2024-11-09 17:17:29.392007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.812 [2024-11-09 17:17:29.392035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.392075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.812 [2024-11-09 17:17:29.392090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.392146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.812 [2024-11-09 17:17:29.392161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.812 #18 NEW cov: 11868 ft: 13255 corp: 7/360b lim: 90 exec/s: 0 rss: 68Mb L: 61/71 MS: 1 CMP- DE: "\017\000"- 00:08:09.812 [2024-11-09 17:17:29.432125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.812 [2024-11-09 17:17:29.432153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.432191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 
00:08:09.812 [2024-11-09 17:17:29.432207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.432264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.812 [2024-11-09 17:17:29.432280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.812 #19 NEW cov: 11868 ft: 13300 corp: 8/421b lim: 90 exec/s: 0 rss: 69Mb L: 61/71 MS: 1 ShuffleBytes- 00:08:09.812 [2024-11-09 17:17:29.472251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.812 [2024-11-09 17:17:29.472278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.472320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.812 [2024-11-09 17:17:29.472336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.472395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.812 [2024-11-09 17:17:29.472427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.812 #20 NEW cov: 11868 ft: 13334 corp: 9/480b lim: 90 exec/s: 0 rss: 69Mb L: 59/71 MS: 1 ChangeBinInt- 00:08:09.812 [2024-11-09 17:17:29.512377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.812 [2024-11-09 17:17:29.512405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.512443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.812 [2024-11-09 17:17:29.512459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.512514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.812 [2024-11-09 17:17:29.512535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.812 #21 NEW cov: 11868 ft: 13352 corp: 10/539b lim: 90 exec/s: 0 rss: 69Mb L: 59/71 MS: 1 CrossOver- 00:08:09.812 [2024-11-09 17:17:29.552534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.812 [2024-11-09 17:17:29.552562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.552604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.812 [2024-11-09 17:17:29.552620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.812 [2024-11-09 17:17:29.552678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 
00:08:09.812 [2024-11-09 17:17:29.552693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.812 #22 NEW cov: 11868 ft: 13383 corp: 11/598b lim: 90 exec/s: 0 rss: 69Mb L: 59/71 MS: 1 ShuffleBytes- 00:08:10.074 [2024-11-09 17:17:29.592634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.074 [2024-11-09 17:17:29.592661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.592700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.074 [2024-11-09 17:17:29.592716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.592772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.074 [2024-11-09 17:17:29.592788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.074 #23 NEW cov: 11868 ft: 13482 corp: 12/659b lim: 90 exec/s: 0 rss: 69Mb L: 61/71 MS: 1 ShuffleBytes- 00:08:10.074 [2024-11-09 17:17:29.632754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.074 [2024-11-09 17:17:29.632781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.632819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.074 [2024-11-09 17:17:29.632835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.632890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.074 [2024-11-09 17:17:29.632905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.074 #24 NEW cov: 11868 ft: 13611 corp: 13/718b lim: 90 exec/s: 0 rss: 69Mb L: 59/71 MS: 1 PersAutoDict- DE: "\017\000"- 00:08:10.074 [2024-11-09 17:17:29.673015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.074 [2024-11-09 17:17:29.673046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.673086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.074 [2024-11-09 17:17:29.673101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.673152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.074 [2024-11-09 17:17:29.673168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.673225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:3 nsid:0 00:08:10.074 [2024-11-09 17:17:29.673240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.074 #25 NEW cov: 11868 ft: 13991 corp: 14/807b lim: 90 exec/s: 0 rss: 69Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:08:10.074 [2024-11-09 17:17:29.723016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.074 [2024-11-09 17:17:29.723043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.723083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.074 [2024-11-09 17:17:29.723099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.723154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.074 [2024-11-09 17:17:29.723185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.074 #26 NEW cov: 11868 ft: 14019 corp: 15/868b lim: 90 exec/s: 0 rss: 69Mb L: 61/89 MS: 1 ShuffleBytes- 00:08:10.074 [2024-11-09 17:17:29.763154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.074 [2024-11-09 17:17:29.763181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.763222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.074 [2024-11-09 17:17:29.763238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.074 [2024-11-09 17:17:29.763295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.074 [2024-11-09 17:17:29.763311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.074 #27 NEW cov: 11868 ft: 14030 corp: 16/927b lim: 90 exec/s: 0 rss: 69Mb L: 59/89 MS: 1 ChangeByte- 00:08:10.074 [2024-11-09 17:17:29.803253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.075 [2024-11-09 17:17:29.803280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.075 [2024-11-09 17:17:29.803341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.075 [2024-11-09 17:17:29.803358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.075 [2024-11-09 17:17:29.803415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.075 [2024-11-09 17:17:29.803431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.075 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 
00:08:10.075 #28 NEW cov: 11891 ft: 14063 corp: 17/986b lim: 90 exec/s: 0 rss: 69Mb L: 59/89 MS: 1 ChangeBit- 00:08:10.333 [2024-11-09 17:17:29.843547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.333 [2024-11-09 17:17:29.843576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.333 [2024-11-09 17:17:29.843629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.333 [2024-11-09 17:17:29.843645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.333 [2024-11-09 17:17:29.843701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.333 [2024-11-09 17:17:29.843717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.333 [2024-11-09 17:17:29.843773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.333 [2024-11-09 17:17:29.843789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.333 #29 NEW cov: 11891 ft: 14077 corp: 18/1075b lim: 90 exec/s: 0 rss: 69Mb L: 89/89 MS: 1 ShuffleBytes- 00:08:10.333 [2024-11-09 17:17:29.883531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.333 [2024-11-09 17:17:29.883558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.333 [2024-11-09 17:17:29.883606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.333 [2024-11-09 17:17:29.883623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.333 [2024-11-09 17:17:29.883681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.333 [2024-11-09 17:17:29.883695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.333 #30 NEW cov: 11891 ft: 14110 corp: 19/1135b lim: 90 exec/s: 30 rss: 69Mb L: 60/89 MS: 1 InsertByte- 00:08:10.333 [2024-11-09 17:17:29.923634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.333 [2024-11-09 17:17:29.923662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.333 [2024-11-09 17:17:29.923699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.334 [2024-11-09 17:17:29.923715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.334 [2024-11-09 17:17:29.923771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.334 [2024-11-09 17:17:29.923787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
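
(The "#N NEW" records interleaved with the SPDK notices are stock libFuzzer status lines; their fields decode as sketched below, using the line that follows as the worked example. Glosses follow standard libFuzzer conventions.)

    # 00:08:10.334 #31 NEW cov: 11891 ft: 14131 corp: 20/1194b lim: 90 exec/s: 31 rss: 69Mb L: 59/89 MS: 1 ChangeBit-
    #   #31     number of inputs executed when the event fired
    #   cov:    code edges covered so far
    #   ft:     coverage features observed so far
    #   corp:   corpus entries / total corpus bytes
    #   lim:    current input-length limit
    #   exec/s: executions per second
    #   rss:    resident memory of the fuzzer process
    #   L:      length of this input / largest input in the corpus
    #   MS:     mutation sequence that produced it (here one mutation, ChangeBit)
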
00:08:10.334 #31 NEW cov: 11891 ft: 14131 corp: 20/1194b lim: 90 exec/s: 31 rss: 69Mb L: 59/89 MS: 1 ChangeBit- 00:08:10.334 [2024-11-09 17:17:29.963748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.334 [2024-11-09 17:17:29.963775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.334 [2024-11-09 17:17:29.963827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.334 [2024-11-09 17:17:29.963843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.334 [2024-11-09 17:17:29.963901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.334 [2024-11-09 17:17:29.963919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.334 #32 NEW cov: 11891 ft: 14141 corp: 21/1255b lim: 90 exec/s: 32 rss: 70Mb L: 61/89 MS: 1 ShuffleBytes- 00:08:10.334 [2024-11-09 17:17:30.003880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.334 [2024-11-09 17:17:30.003910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.334 [2024-11-09 17:17:30.003964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.334 [2024-11-09 17:17:30.003980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.334 [2024-11-09 17:17:30.004038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.334 [2024-11-09 17:17:30.004055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.334 #33 NEW cov: 11891 ft: 14149 corp: 22/1316b lim: 90 exec/s: 33 rss: 70Mb L: 61/89 MS: 1 CrossOver- 00:08:10.334 [2024-11-09 17:17:30.043991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.334 [2024-11-09 17:17:30.044023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.334 [2024-11-09 17:17:30.044058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.334 [2024-11-09 17:17:30.044073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.334 [2024-11-09 17:17:30.044129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.334 [2024-11-09 17:17:30.044146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.334 #34 NEW cov: 11891 ft: 14175 corp: 23/1375b lim: 90 exec/s: 34 rss: 70Mb L: 59/89 MS: 1 ChangeByte- 00:08:10.334 [2024-11-09 17:17:30.084075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.334 [2024-11-09 17:17:30.084106] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.334 [2024-11-09 17:17:30.084145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.334 [2024-11-09 17:17:30.084161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.334 [2024-11-09 17:17:30.084217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.334 [2024-11-09 17:17:30.084233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.593 #35 NEW cov: 11891 ft: 14218 corp: 24/1446b lim: 90 exec/s: 35 rss: 70Mb L: 71/89 MS: 1 CMP- DE: "\011\000"- 00:08:10.593 [2024-11-09 17:17:30.124236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.593 [2024-11-09 17:17:30.124266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.593 [2024-11-09 17:17:30.124319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.593 [2024-11-09 17:17:30.124335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.593 [2024-11-09 17:17:30.124392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.593 [2024-11-09 17:17:30.124407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.593 #36 NEW cov: 11891 ft: 14251 corp: 25/1507b lim: 90 exec/s: 36 rss: 70Mb L: 61/89 MS: 1 CopyPart- 00:08:10.594 [2024-11-09 17:17:30.164278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.594 [2024-11-09 17:17:30.164305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.164359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.594 [2024-11-09 17:17:30.164375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.164434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.594 [2024-11-09 17:17:30.164450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.594 #37 NEW cov: 11891 ft: 14268 corp: 26/1567b lim: 90 exec/s: 37 rss: 70Mb L: 60/89 MS: 1 InsertByte- 00:08:10.594 [2024-11-09 17:17:30.204573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.594 [2024-11-09 17:17:30.204601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.204643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.594 [2024-11-09 17:17:30.204659] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.204714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.594 [2024-11-09 17:17:30.204730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.204787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.594 [2024-11-09 17:17:30.204802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.594 #38 NEW cov: 11891 ft: 14298 corp: 27/1656b lim: 90 exec/s: 38 rss: 70Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:08:10.594 [2024-11-09 17:17:30.244501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.594 [2024-11-09 17:17:30.244534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.244570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.594 [2024-11-09 17:17:30.244586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.244644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.594 [2024-11-09 17:17:30.244661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.594 #39 NEW cov: 11891 ft: 14325 corp: 28/1716b lim: 90 exec/s: 39 rss: 70Mb L: 60/89 MS: 1 CrossOver- 00:08:10.594 [2024-11-09 17:17:30.284653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.594 [2024-11-09 17:17:30.284681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.284736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.594 [2024-11-09 17:17:30.284753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.284812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.594 [2024-11-09 17:17:30.284829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.594 #45 NEW cov: 11891 ft: 14338 corp: 29/1775b lim: 90 exec/s: 45 rss: 70Mb L: 59/89 MS: 1 ChangeByte- 00:08:10.594 [2024-11-09 17:17:30.324940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.594 [2024-11-09 17:17:30.324968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.325019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.594 [2024-11-09 
17:17:30.325035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.325090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.594 [2024-11-09 17:17:30.325105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.594 [2024-11-09 17:17:30.325162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.594 [2024-11-09 17:17:30.325178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.594 #46 NEW cov: 11891 ft: 14355 corp: 30/1847b lim: 90 exec/s: 46 rss: 70Mb L: 72/89 MS: 1 CopyPart- 00:08:10.854 [2024-11-09 17:17:30.364841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.854 [2024-11-09 17:17:30.364870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.364910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.854 [2024-11-09 17:17:30.364926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.364984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.854 [2024-11-09 17:17:30.365001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.854 #47 NEW cov: 11891 ft: 14411 corp: 31/1908b lim: 90 exec/s: 47 rss: 70Mb L: 61/89 MS: 1 ChangeBit- 00:08:10.854 [2024-11-09 17:17:30.404981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.854 [2024-11-09 17:17:30.405010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.405048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.854 [2024-11-09 17:17:30.405065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.405124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.854 [2024-11-09 17:17:30.405140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.854 #48 NEW cov: 11891 ft: 14419 corp: 32/1970b lim: 90 exec/s: 48 rss: 70Mb L: 62/89 MS: 1 InsertByte- 00:08:10.854 [2024-11-09 17:17:30.445287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.854 [2024-11-09 17:17:30.445315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.445365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.854 [2024-11-09 
17:17:30.445382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.445441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.854 [2024-11-09 17:17:30.445456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.445514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.854 [2024-11-09 17:17:30.445535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.854 #49 NEW cov: 11891 ft: 14425 corp: 33/2059b lim: 90 exec/s: 49 rss: 70Mb L: 89/89 MS: 1 ChangeBit- 00:08:10.854 [2024-11-09 17:17:30.495288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.854 [2024-11-09 17:17:30.495316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.495355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.854 [2024-11-09 17:17:30.495371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.495428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.854 [2024-11-09 17:17:30.495443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.854 #50 NEW cov: 11891 ft: 14430 corp: 34/2118b lim: 90 exec/s: 50 rss: 70Mb L: 59/89 MS: 1 ChangeBinInt- 00:08:10.854 [2024-11-09 17:17:30.535390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.854 [2024-11-09 17:17:30.535418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.535457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.854 [2024-11-09 17:17:30.535473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.535533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.854 [2024-11-09 17:17:30.535551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.854 #51 NEW cov: 11891 ft: 14442 corp: 35/2187b lim: 90 exec/s: 51 rss: 70Mb L: 69/89 MS: 1 InsertRepeatedBytes- 00:08:10.854 [2024-11-09 17:17:30.575417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.854 [2024-11-09 17:17:30.575445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.575484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.854 
[2024-11-09 17:17:30.575500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.575561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.854 [2024-11-09 17:17:30.575576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.854 #52 NEW cov: 11891 ft: 14522 corp: 36/2248b lim: 90 exec/s: 52 rss: 70Mb L: 61/89 MS: 1 ChangeBinInt- 00:08:10.854 [2024-11-09 17:17:30.615579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.854 [2024-11-09 17:17:30.615606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.854 [2024-11-09 17:17:30.615652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.855 [2024-11-09 17:17:30.615669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.855 [2024-11-09 17:17:30.615728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.855 [2024-11-09 17:17:30.615745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.114 #58 NEW cov: 11891 ft: 14575 corp: 37/2307b lim: 90 exec/s: 58 rss: 70Mb L: 59/89 MS: 1 ChangeByte- 00:08:11.114 [2024-11-09 17:17:30.655700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.115 [2024-11-09 17:17:30.655728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.115 [2024-11-09 17:17:30.655788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.115 [2024-11-09 17:17:30.655803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.115 [2024-11-09 17:17:30.655862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.115 [2024-11-09 17:17:30.655877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.115 #59 NEW cov: 11891 ft: 14648 corp: 38/2366b lim: 90 exec/s: 59 rss: 70Mb L: 59/89 MS: 1 ChangeBit- 00:08:11.115 [2024-11-09 17:17:30.695876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.115 [2024-11-09 17:17:30.695903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.115 [2024-11-09 17:17:30.695943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.115 [2024-11-09 17:17:30.695958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.115 [2024-11-09 17:17:30.696017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.115 
[2024-11-09 17:17:30.696033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.115 #60 NEW cov: 11891 ft: 14656 corp: 39/2432b lim: 90 exec/s: 60 rss: 70Mb L: 66/89 MS: 1 InsertRepeatedBytes- 00:08:11.115 [2024-11-09 17:17:30.735974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.115 [2024-11-09 17:17:30.736001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.115 [2024-11-09 17:17:30.736041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.115 [2024-11-09 17:17:30.736057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.115 [2024-11-09 17:17:30.736114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.115 [2024-11-09 17:17:30.736129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.115 #61 NEW cov: 11891 ft: 14660 corp: 40/2493b lim: 90 exec/s: 61 rss: 70Mb L: 61/89 MS: 1 ChangeBit- 00:08:11.115 [2024-11-09 17:17:30.775768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.115 [2024-11-09 17:17:30.775795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.115 #63 NEW cov: 11891 ft: 15466 corp: 41/2524b lim: 90 exec/s: 63 rss: 70Mb L: 31/89 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:11.115 [2024-11-09 17:17:30.816192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.115 [2024-11-09 17:17:30.816219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.115 [2024-11-09 17:17:30.816259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.115 [2024-11-09 17:17:30.816274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.115 [2024-11-09 17:17:30.816331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.115 [2024-11-09 17:17:30.816362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.115 #64 NEW cov: 11891 ft: 15486 corp: 42/2583b lim: 90 exec/s: 64 rss: 70Mb L: 59/89 MS: 1 CopyPart- 00:08:11.115 [2024-11-09 17:17:30.856293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.115 [2024-11-09 17:17:30.856321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.115 [2024-11-09 17:17:30.856358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.115 [2024-11-09 17:17:30.856374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.115 
[2024-11-09 17:17:30.856432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0
00:08:11.115 [2024-11-09 17:17:30.856449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:11.115 #65 NEW cov: 11891 ft: 15491 corp: 43/2642b lim: 90 exec/s: 65 rss: 70Mb L: 59/89 MS: 1 PersAutoDict- DE: "\011\000"-
00:08:11.375 [2024-11-09 17:17:30.886402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0
00:08:11.375 [2024-11-09 17:17:30.886430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:11.375 [2024-11-09 17:17:30.886468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0
00:08:11.375 [2024-11-09 17:17:30.886484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:11.375 [2024-11-09 17:17:30.886546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0
00:08:11.375 [2024-11-09 17:17:30.886563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:11.375 #66 NEW cov: 11891 ft: 15501 corp: 44/2701b lim: 90 exec/s: 33 rss: 70Mb L: 59/89 MS: 1 PersAutoDict- DE: "\017\000"-
00:08:11.375 #66 DONE cov: 11891 ft: 15501 corp: 44/2701b lim: 90 exec/s: 33 rss: 70Mb
00:08:11.375 ###### Recommended dictionary. ######
00:08:11.375 "\017\000" # Uses: 3
00:08:11.375 "\011\000" # Uses: 1
00:08:11.375 ###### End of recommended dictionary. ######
00:08:11.375 Done 66 runs in 2 second(s)
00:08:11.375 17:17:31 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_20.conf
00:08:11.375 17:17:31 -- ../common.sh@72 -- # (( i++ ))
00:08:11.375 17:17:31 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:11.375 17:17:31 -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1
00:08:11.375 17:17:31 -- nvmf/run.sh@23 -- # local fuzzer_type=21
00:08:11.375 17:17:31 -- nvmf/run.sh@24 -- # local timen=1
00:08:11.375 17:17:31 -- nvmf/run.sh@25 -- # local core=0x1
00:08:11.375 17:17:31 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:08:11.375 17:17:31 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf
00:08:11.375 17:17:31 -- nvmf/run.sh@29 -- # printf %02d 21
00:08:11.375 17:17:31 -- nvmf/run.sh@29 -- # port=4421
00:08:11.375 17:17:31 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:08:11.375 17:17:31 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421'
00:08:11.375 17:17:31 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:11.375 17:17:31 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 -r /var/tmp/spdk21.sock
00:08:11.375 [2024-11-09 17:17:31.077307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:11.375 [2024-11-09 17:17:31.077395] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375198 ]
00:08:11.375 EAL: No free 2048 kB hugepages reported on node 1
00:08:11.634 [2024-11-09 17:17:31.256606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:11.634 [2024-11-09 17:17:31.319250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:11.634 [2024-11-09 17:17:31.319373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.634 [2024-11-09 17:17:31.377160] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:11.634 [2024-11-09 17:17:31.393540] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:08:11.893 INFO: Running with entropic power schedule (0xFF, 100).
00:08:11.893 INFO: Seed: 2205395690
00:08:11.893 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:08:11.893 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:08:11.893 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:08:11.893 INFO: A corpus is not provided, starting from an empty corpus
00:08:11.893 #2 INITED exec/s: 0 rss: 60Mb
00:08:11.893 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:11.893 This may also happen if the target rejected all inputs we tried so far
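
Runs 20 and 21 report identical instrumentation ("Loaded 1 modules (344649 inline 8-bit counters)" and a PC table with the same count), as expected: every round re-executes the same llvm_nvme_fuzz binary, and only the -Z selection, config, and port differ. The counter figure can be cross-checked against the module's counter address range, since each inline 8-bit counter occupies one byte (addresses copied from the INFO line above):

    echo $((0x286e1d5 - 0x2819f8c))   # 344649 bytes, i.e. one byte per counter
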
00:08:11.893 [2024-11-09 17:17:31.448829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:08:12.154 [2024-11-09 17:17:31.448860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:12.154 NEW_FUNC[1/672]: 0x45fc08 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623
00:08:12.154 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:12.154 #31 NEW cov: 11639 ft: 11624 corp: 2/13b lim: 50 exec/s: 0 rss: 68Mb L: 12/12 MS: 4 ShuffleBytes-ChangeByte-ChangeByte-InsertRepeatedBytes-
00:08:12.154 [2024-11-09 17:17:31.749519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:08:12.154 [2024-11-09 17:17:31.749557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:12.154 #32 NEW cov: 11752 ft: 12178 corp: 3/25b lim: 50 exec/s: 0 rss: 68Mb L: 12/12 MS: 1 ShuffleBytes-
00:08:12.154 [2024-11-09 17:17:31.790131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:08:12.154 [2024-11-09 17:17:31.790159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:12.154 [2024-11-09 17:17:31.790204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:08:12.154 [2024-11-09 17:17:31.790218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:12.154 [2024-11-09 17:17:31.790271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:08:12.154 [2024-11-09 17:17:31.790286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:12.154 [2024-11-09 17:17:31.790342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:08:12.154 [2024-11-09 17:17:31.790355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:12.154 [2024-11-09 17:17:31.790406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0
00:08:12.154 [2024-11-09 17:17:31.790420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:08:12.154 #33 NEW cov: 11758 ft: 13259 corp: 4/75b lim: 50 exec/s: 0 rss: 69Mb L: 50/50 MS: 1 InsertRepeatedBytes-
00:08:12.154 [2024-11-09 17:17:31.829785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:08:12.154 [2024-11-09 17:17:31.829813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:12.154 [2024-11-09 17:17:31.829856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:08:12.154 [2024-11-09
17:17:31.829870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.154 #44 NEW cov: 11843 ft: 13804 corp: 5/99b lim: 50 exec/s: 0 rss: 69Mb L: 24/50 MS: 1 CopyPart- 00:08:12.154 [2024-11-09 17:17:31.880263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.154 [2024-11-09 17:17:31.880291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.154 [2024-11-09 17:17:31.880339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.154 [2024-11-09 17:17:31.880354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.154 [2024-11-09 17:17:31.880406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.154 [2024-11-09 17:17:31.880420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.154 [2024-11-09 17:17:31.880471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.154 [2024-11-09 17:17:31.880486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.154 #45 NEW cov: 11843 ft: 13957 corp: 6/147b lim: 50 exec/s: 0 rss: 69Mb L: 48/50 MS: 1 InsertRepeatedBytes- 00:08:12.154 [2024-11-09 17:17:31.920117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.154 [2024-11-09 17:17:31.920145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.154 [2024-11-09 17:17:31.920197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.154 [2024-11-09 17:17:31.920213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.413 #51 NEW cov: 11843 ft: 14092 corp: 7/172b lim: 50 exec/s: 0 rss: 69Mb L: 25/50 MS: 1 InsertByte- 00:08:12.413 [2024-11-09 17:17:31.960496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.413 [2024-11-09 17:17:31.960529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.413 [2024-11-09 17:17:31.960590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.413 [2024-11-09 17:17:31.960606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.413 [2024-11-09 17:17:31.960659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.413 [2024-11-09 17:17:31.960676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.413 [2024-11-09 17:17:31.960729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.413 [2024-11-09 
17:17:31.960744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.413 #52 NEW cov: 11843 ft: 14239 corp: 8/220b lim: 50 exec/s: 0 rss: 69Mb L: 48/50 MS: 1 ShuffleBytes- 00:08:12.413 [2024-11-09 17:17:32.010214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.414 [2024-11-09 17:17:32.010242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.414 #53 NEW cov: 11843 ft: 14292 corp: 9/232b lim: 50 exec/s: 0 rss: 69Mb L: 12/50 MS: 1 ChangeByte- 00:08:12.414 [2024-11-09 17:17:32.050452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.414 [2024-11-09 17:17:32.050478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.414 [2024-11-09 17:17:32.050520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.414 [2024-11-09 17:17:32.050541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.414 #54 NEW cov: 11843 ft: 14333 corp: 10/256b lim: 50 exec/s: 0 rss: 69Mb L: 24/50 MS: 1 ShuffleBytes- 00:08:12.414 [2024-11-09 17:17:32.090575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.414 [2024-11-09 17:17:32.090602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.414 [2024-11-09 17:17:32.090654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.414 [2024-11-09 17:17:32.090669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.414 #55 NEW cov: 11843 ft: 14383 corp: 11/281b lim: 50 exec/s: 0 rss: 69Mb L: 25/50 MS: 1 ChangeBit- 00:08:12.414 [2024-11-09 17:17:32.131164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.414 [2024-11-09 17:17:32.131191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.414 [2024-11-09 17:17:32.131255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.414 [2024-11-09 17:17:32.131271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.414 [2024-11-09 17:17:32.131324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.414 [2024-11-09 17:17:32.131338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.414 [2024-11-09 17:17:32.131390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.414 [2024-11-09 17:17:32.131405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.414 #56 NEW cov: 11843 ft: 14395 corp: 12/324b lim: 50 
exec/s: 0 rss: 69Mb L: 43/50 MS: 1 InsertRepeatedBytes- 00:08:12.414 [2024-11-09 17:17:32.171090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.414 [2024-11-09 17:17:32.171117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.414 [2024-11-09 17:17:32.171154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.414 [2024-11-09 17:17:32.171173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.414 [2024-11-09 17:17:32.171225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.414 [2024-11-09 17:17:32.171239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.414 [2024-11-09 17:17:32.171291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.414 [2024-11-09 17:17:32.171306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.673 #57 NEW cov: 11843 ft: 14451 corp: 13/373b lim: 50 exec/s: 0 rss: 69Mb L: 49/50 MS: 1 InsertByte- 00:08:12.673 [2024-11-09 17:17:32.211360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.673 [2024-11-09 17:17:32.211387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.673 [2024-11-09 17:17:32.211434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.673 [2024-11-09 17:17:32.211449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.673 [2024-11-09 17:17:32.211500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.673 [2024-11-09 17:17:32.211514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.211570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.674 [2024-11-09 17:17:32.211585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.211635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:12.674 [2024-11-09 17:17:32.211650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:12.674 #63 NEW cov: 11843 ft: 14480 corp: 14/423b lim: 50 exec/s: 0 rss: 69Mb L: 50/50 MS: 1 ChangeBinInt- 00:08:12.674 [2024-11-09 17:17:32.261384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.674 [2024-11-09 17:17:32.261411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.261464] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.674 [2024-11-09 17:17:32.261481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.261533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.674 [2024-11-09 17:17:32.261549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.261600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.674 [2024-11-09 17:17:32.261615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.674 #64 NEW cov: 11843 ft: 14511 corp: 15/472b lim: 50 exec/s: 0 rss: 69Mb L: 49/50 MS: 1 ShuffleBytes- 00:08:12.674 [2024-11-09 17:17:32.301514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.674 [2024-11-09 17:17:32.301546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.301585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.674 [2024-11-09 17:17:32.301601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.301651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.674 [2024-11-09 17:17:32.301665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.301717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.674 [2024-11-09 17:17:32.301731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.674 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:12.674 #65 NEW cov: 11866 ft: 14533 corp: 16/516b lim: 50 exec/s: 0 rss: 69Mb L: 44/50 MS: 1 InsertRepeatedBytes- 00:08:12.674 [2024-11-09 17:17:32.351632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.674 [2024-11-09 17:17:32.351659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.351705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.674 [2024-11-09 17:17:32.351720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.351769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.674 [2024-11-09 17:17:32.351784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 
17:17:32.351836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.674 [2024-11-09 17:17:32.351850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.674 #66 NEW cov: 11866 ft: 14592 corp: 17/564b lim: 50 exec/s: 0 rss: 69Mb L: 48/50 MS: 1 ChangeBit- 00:08:12.674 [2024-11-09 17:17:32.391293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.674 [2024-11-09 17:17:32.391320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.674 #67 NEW cov: 11866 ft: 14665 corp: 18/576b lim: 50 exec/s: 0 rss: 69Mb L: 12/50 MS: 1 ChangeBit- 00:08:12.674 [2024-11-09 17:17:32.431880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.674 [2024-11-09 17:17:32.431906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.431954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.674 [2024-11-09 17:17:32.431969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.432021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.674 [2024-11-09 17:17:32.432036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.674 [2024-11-09 17:17:32.432088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.674 [2024-11-09 17:17:32.432103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.933 #68 NEW cov: 11866 ft: 14697 corp: 19/618b lim: 50 exec/s: 68 rss: 70Mb L: 42/50 MS: 1 EraseBytes- 00:08:12.933 [2024-11-09 17:17:32.481590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.933 [2024-11-09 17:17:32.481620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.933 #69 NEW cov: 11866 ft: 14700 corp: 20/631b lim: 50 exec/s: 69 rss: 70Mb L: 13/50 MS: 1 InsertByte- 00:08:12.933 [2024-11-09 17:17:32.521816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.933 [2024-11-09 17:17:32.521843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.933 [2024-11-09 17:17:32.521883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.933 [2024-11-09 17:17:32.521898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.933 #70 NEW cov: 11866 ft: 14800 corp: 21/656b lim: 50 exec/s: 70 rss: 70Mb L: 25/50 MS: 1 EraseBytes- 00:08:12.933 [2024-11-09 17:17:32.561833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:0 nsid:0 00:08:12.933 [2024-11-09 17:17:32.561860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.933 #71 NEW cov: 11866 ft: 14810 corp: 22/668b lim: 50 exec/s: 71 rss: 70Mb L: 12/50 MS: 1 ChangeBinInt- 00:08:12.933 [2024-11-09 17:17:32.602500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.934 [2024-11-09 17:17:32.602531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.602602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.934 [2024-11-09 17:17:32.602617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.602669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.934 [2024-11-09 17:17:32.602684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.602735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.934 [2024-11-09 17:17:32.602750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.602803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:12.934 [2024-11-09 17:17:32.602819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:12.934 #72 NEW cov: 11866 ft: 14826 corp: 23/718b lim: 50 exec/s: 72 rss: 70Mb L: 50/50 MS: 1 CrossOver- 00:08:12.934 [2024-11-09 17:17:32.642640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.934 [2024-11-09 17:17:32.642667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.642713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.934 [2024-11-09 17:17:32.642728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.642779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.934 [2024-11-09 17:17:32.642793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.642843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.934 [2024-11-09 17:17:32.642858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.642913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:12.934 [2024-11-09 17:17:32.642927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:12.934 #73 NEW cov: 11866 ft: 14833 corp: 24/768b lim: 50 exec/s: 73 rss: 70Mb L: 50/50 MS: 1 ChangeBinInt- 00:08:12.934 [2024-11-09 17:17:32.682493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.934 [2024-11-09 17:17:32.682519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.682569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.934 [2024-11-09 17:17:32.682584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.682634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.934 [2024-11-09 17:17:32.682649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.934 [2024-11-09 17:17:32.682700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.934 [2024-11-09 17:17:32.682715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.934 #74 NEW cov: 11866 ft: 14849 corp: 25/817b lim: 50 exec/s: 74 rss: 70Mb L: 49/50 MS: 1 InsertByte- 00:08:13.193 [2024-11-09 17:17:32.722772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.193 [2024-11-09 17:17:32.722798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.193 [2024-11-09 17:17:32.722863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.193 [2024-11-09 17:17:32.722878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.193 [2024-11-09 17:17:32.722929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.193 [2024-11-09 17:17:32.722943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.193 [2024-11-09 17:17:32.722993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.193 [2024-11-09 17:17:32.723008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.193 [2024-11-09 17:17:32.723061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:13.193 [2024-11-09 17:17:32.723077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:13.193 #75 NEW cov: 11866 ft: 14903 corp: 26/867b lim: 50 exec/s: 75 rss: 70Mb L: 50/50 MS: 1 ChangeBit- 00:08:13.193 [2024-11-09 17:17:32.762497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.193 [2024-11-09 17:17:32.762527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.193 [2024-11-09 17:17:32.762579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.193 [2024-11-09 17:17:32.762595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.193 #76 NEW cov: 11866 ft: 14930 corp: 27/894b lim: 50 exec/s: 76 rss: 70Mb L: 27/50 MS: 1 EraseBytes- 00:08:13.193 [2024-11-09 17:17:32.802444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.193 [2024-11-09 17:17:32.802473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.193 #77 NEW cov: 11866 ft: 14953 corp: 28/906b lim: 50 exec/s: 77 rss: 70Mb L: 12/50 MS: 1 ShuffleBytes- 00:08:13.194 [2024-11-09 17:17:32.842964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.194 [2024-11-09 17:17:32.842990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.843031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.194 [2024-11-09 17:17:32.843047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.843099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.194 [2024-11-09 17:17:32.843114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.843168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.194 [2024-11-09 17:17:32.843183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.194 #78 NEW cov: 11866 ft: 14960 corp: 29/954b lim: 50 exec/s: 78 rss: 70Mb L: 48/50 MS: 1 ChangeBit- 00:08:13.194 [2024-11-09 17:17:32.883237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.194 [2024-11-09 17:17:32.883264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.883310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.194 [2024-11-09 17:17:32.883325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.883377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.194 [2024-11-09 17:17:32.883390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.883441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.194 [2024-11-09 17:17:32.883456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.883507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:13.194 [2024-11-09 17:17:32.883521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:13.194 #79 NEW cov: 11866 ft: 14973 corp: 30/1004b lim: 50 exec/s: 79 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:08:13.194 [2024-11-09 17:17:32.922764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.194 [2024-11-09 17:17:32.922790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.194 #80 NEW cov: 11866 ft: 14997 corp: 31/1017b lim: 50 exec/s: 80 rss: 70Mb L: 13/50 MS: 1 InsertByte- 00:08:13.194 [2024-11-09 17:17:32.953433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.194 [2024-11-09 17:17:32.953458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.953527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.194 [2024-11-09 17:17:32.953542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.953595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.194 [2024-11-09 17:17:32.953610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.953660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.194 [2024-11-09 17:17:32.953674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.194 [2024-11-09 17:17:32.953727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:13.194 [2024-11-09 17:17:32.953741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:13.453 #81 NEW cov: 11866 ft: 15031 corp: 32/1067b lim: 50 exec/s: 81 rss: 70Mb L: 50/50 MS: 1 ChangeBit- 00:08:13.453 [2024-11-09 17:17:32.993541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.453 [2024-11-09 17:17:32.993568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.453 [2024-11-09 17:17:32.993608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.453 [2024-11-09 17:17:32.993620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.453 [2024-11-09 17:17:32.993672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.453 [2024-11-09 17:17:32.993688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.453 [2024-11-09 17:17:32.993738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.453 [2024-11-09 17:17:32.993752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.453 [2024-11-09 17:17:32.993805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:13.453 [2024-11-09 17:17:32.993818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:13.453 #82 NEW cov: 11866 ft: 15047 corp: 33/1117b lim: 50 exec/s: 82 rss: 70Mb L: 50/50 MS: 1 CMP- DE: "\377\377\377\017"- 00:08:13.453 [2024-11-09 17:17:33.043744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.454 [2024-11-09 17:17:33.043771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.043811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.454 [2024-11-09 17:17:33.043826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.043877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.454 [2024-11-09 17:17:33.043891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.043942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.454 [2024-11-09 17:17:33.043956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.044009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:13.454 [2024-11-09 17:17:33.044024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:13.454 #83 NEW cov: 11866 ft: 15051 corp: 34/1167b lim: 50 exec/s: 83 rss: 70Mb L: 50/50 MS: 1 ChangeBit- 00:08:13.454 [2024-11-09 17:17:33.083248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.454 [2024-11-09 17:17:33.083275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.454 #84 NEW cov: 11866 ft: 15059 corp: 35/1186b lim: 50 exec/s: 84 rss: 70Mb L: 19/50 MS: 1 InsertRepeatedBytes- 00:08:13.454 [2024-11-09 17:17:33.123923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.454 [2024-11-09 17:17:33.123950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.124000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.454 [2024-11-09 17:17:33.124015] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.124068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.454 [2024-11-09 17:17:33.124082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.124135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.454 [2024-11-09 17:17:33.124149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.124202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:13.454 [2024-11-09 17:17:33.124218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:13.454 #85 NEW cov: 11866 ft: 15072 corp: 36/1236b lim: 50 exec/s: 85 rss: 70Mb L: 50/50 MS: 1 CrossOver- 00:08:13.454 [2024-11-09 17:17:33.163615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.454 [2024-11-09 17:17:33.163642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.163720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.454 [2024-11-09 17:17:33.163735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.454 #86 NEW cov: 11866 ft: 15087 corp: 37/1263b lim: 50 exec/s: 86 rss: 70Mb L: 27/50 MS: 1 EraseBytes- 00:08:13.454 [2024-11-09 17:17:33.203732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.454 [2024-11-09 17:17:33.203761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.454 [2024-11-09 17:17:33.203830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.454 [2024-11-09 17:17:33.203844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.714 #87 NEW cov: 11866 ft: 15106 corp: 38/1291b lim: 50 exec/s: 87 rss: 70Mb L: 28/50 MS: 1 InsertByte- 00:08:13.714 [2024-11-09 17:17:33.243752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.714 [2024-11-09 17:17:33.243780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.714 #88 NEW cov: 11866 ft: 15136 corp: 39/1303b lim: 50 exec/s: 88 rss: 70Mb L: 12/50 MS: 1 ChangeByte- 00:08:13.714 [2024-11-09 17:17:33.283890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.714 [2024-11-09 17:17:33.283917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.714 #90 NEW cov: 11866 ft: 15147 corp: 40/1316b lim: 50 exec/s: 90 rss: 70Mb L: 
13/50 MS: 2 EraseBytes-PersAutoDict- DE: "\377\377\377\017"- 00:08:13.714 [2024-11-09 17:17:33.324561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.714 [2024-11-09 17:17:33.324588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.714 [2024-11-09 17:17:33.324650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.714 [2024-11-09 17:17:33.324666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.714 [2024-11-09 17:17:33.324718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.714 [2024-11-09 17:17:33.324732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.714 [2024-11-09 17:17:33.324784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.714 [2024-11-09 17:17:33.324799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.714 [2024-11-09 17:17:33.324852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:13.714 [2024-11-09 17:17:33.324868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:13.714 #91 NEW cov: 11866 ft: 15182 corp: 41/1366b lim: 50 exec/s: 91 rss: 70Mb L: 50/50 MS: 1 CMP- DE: "\017\000"- 00:08:13.714 [2024-11-09 17:17:33.374261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.714 [2024-11-09 17:17:33.374287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.714 [2024-11-09 17:17:33.374354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.714 [2024-11-09 17:17:33.374370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.714 #92 NEW cov: 11866 ft: 15195 corp: 42/1390b lim: 50 exec/s: 92 rss: 70Mb L: 24/50 MS: 1 CrossOver- 00:08:13.714 [2024-11-09 17:17:33.414249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.714 [2024-11-09 17:17:33.414277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.714 #93 NEW cov: 11866 ft: 15216 corp: 43/1402b lim: 50 exec/s: 46 rss: 70Mb L: 12/50 MS: 1 ChangeBit- 00:08:13.714 #93 DONE cov: 11866 ft: 15216 corp: 43/1402b lim: 50 exec/s: 46 rss: 70Mb 00:08:13.714 ###### Recommended dictionary. ###### 00:08:13.714 "\377\377\377\017" # Uses: 1 00:08:13.714 "\017\000" # Uses: 0 00:08:13.714 ###### End of recommended dictionary. 
###### 00:08:13.714 Done 93 runs in 2 second(s) 00:08:13.974 17:17:33 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_21.conf 00:08:13.974 17:17:33 -- ../common.sh@72 -- # (( i++ )) 00:08:13.974 17:17:33 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:13.974 17:17:33 -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:13.974 17:17:33 -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:13.974 17:17:33 -- nvmf/run.sh@24 -- # local timen=1 00:08:13.974 17:17:33 -- nvmf/run.sh@25 -- # local core=0x1 00:08:13.974 17:17:33 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:13.974 17:17:33 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:13.974 17:17:33 -- nvmf/run.sh@29 -- # printf %02d 22 00:08:13.974 17:17:33 -- nvmf/run.sh@29 -- # port=4422 00:08:13.974 17:17:33 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:13.974 17:17:33 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:13.974 17:17:33 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:13.974 17:17:33 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 -r /var/tmp/spdk22.sock 00:08:13.974 [2024-11-09 17:17:33.598024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:13.974 [2024-11-09 17:17:33.598100] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375690 ] 00:08:13.974 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.233 [2024-11-09 17:17:33.776780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.233 [2024-11-09 17:17:33.839819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.233 [2024-11-09 17:17:33.839959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.233 [2024-11-09 17:17:33.897731] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.233 [2024-11-09 17:17:33.914089] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:14.233 INFO: Running with entropic power schedule (0xFF, 100). 00:08:14.233 INFO: Seed: 430420775 00:08:14.233 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:14.233 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:14.233 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:14.233 INFO: A corpus is not provided, starting from an empty corpus 00:08:14.233 #2 INITED exec/s: 0 rss: 61Mb 00:08:14.233 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
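[Note: the run-21 summary above recommends a two-entry dictionary for this target, printed in C-style octal escapes ("\377\377\377\017" and "\017\000"), while the next run starts from an empty corpus. A minimal sketch of how those tokens could seed a later run, assuming the llvm_nvme_fuzz harness forwards libFuzzer's standard -dict= option (this log does not show that flag, and the file path and entry names below are hypothetical):

cat > /tmp/llvm_nvmf_21.dict <<'EOF'
# libFuzzer dictionary files take \xNN hex escapes rather than the
# octal form printed in the run summary: \377 -> \xFF, \017 -> \x0F
reservation_release_token="\xFF\xFF\xFF\x0F"
short_token="\x0F\x00"
EOF

# then append -dict=/tmp/llvm_nvmf_21.dict to the fuzzer invocation,
# alongside the -D corpus directory the wrapper script already passes

Whether the extra flag reaches libFuzzer depends on how the SPDK wrapper forwards arguments; the dictionary file format itself (optional name, quoted value, '#' comments) is standard libFuzzer/AFL syntax.]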
00:08:14.233 This may also happen if the target rejected all inputs we tried so far 00:08:14.233 [2024-11-09 17:17:33.958786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.233 [2024-11-09 17:17:33.958819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.233 [2024-11-09 17:17:33.958869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.233 [2024-11-09 17:17:33.958886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.233 [2024-11-09 17:17:33.958915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.233 [2024-11-09 17:17:33.958931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.752 NEW_FUNC[1/672]: 0x461ed8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:14.752 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:14.752 #10 NEW cov: 11665 ft: 11666 corp: 2/58b lim: 85 exec/s: 0 rss: 68Mb L: 57/57 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:08:14.752 [2024-11-09 17:17:34.279479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.752 [2024-11-09 17:17:34.279514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.279571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.752 [2024-11-09 17:17:34.279588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.279622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.752 [2024-11-09 17:17:34.279638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.752 #11 NEW cov: 11778 ft: 12105 corp: 3/124b lim: 85 exec/s: 0 rss: 68Mb L: 66/66 MS: 1 CopyPart- 00:08:14.752 [2024-11-09 17:17:34.349594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.752 [2024-11-09 17:17:34.349625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.349657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.752 [2024-11-09 17:17:34.349674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.349705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.752 [2024-11-09 17:17:34.349721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:08:14.752 #17 NEW cov: 11784 ft: 12462 corp: 4/190b lim: 85 exec/s: 0 rss: 68Mb L: 66/66 MS: 1 ChangeByte- 00:08:14.752 [2024-11-09 17:17:34.409721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.752 [2024-11-09 17:17:34.409751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.409799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.752 [2024-11-09 17:17:34.409817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.409846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.752 [2024-11-09 17:17:34.409862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.752 #18 NEW cov: 11869 ft: 12711 corp: 5/247b lim: 85 exec/s: 0 rss: 68Mb L: 57/66 MS: 1 CMP- DE: "\000\000\177\031D\000\227R"- 00:08:14.752 [2024-11-09 17:17:34.459821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.752 [2024-11-09 17:17:34.459850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.459897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.752 [2024-11-09 17:17:34.459914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.459943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.752 [2024-11-09 17:17:34.459959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.752 #19 NEW cov: 11869 ft: 12940 corp: 6/304b lim: 85 exec/s: 0 rss: 68Mb L: 57/66 MS: 1 ChangeBit- 00:08:14.752 [2024-11-09 17:17:34.509990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.752 [2024-11-09 17:17:34.510021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.510054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.752 [2024-11-09 17:17:34.510071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.752 [2024-11-09 17:17:34.510100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.752 [2024-11-09 17:17:34.510121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.012 #20 NEW cov: 11869 ft: 13041 corp: 7/370b lim: 85 exec/s: 0 rss: 68Mb L: 66/66 MS: 1 ShuffleBytes- 00:08:15.012 [2024-11-09 17:17:34.580180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.012 [2024-11-09 
17:17:34.580209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.012 [2024-11-09 17:17:34.580256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.012 [2024-11-09 17:17:34.580273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.012 [2024-11-09 17:17:34.580303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.012 [2024-11-09 17:17:34.580321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.012 #26 NEW cov: 11869 ft: 13195 corp: 8/427b lim: 85 exec/s: 0 rss: 68Mb L: 57/66 MS: 1 ChangeBinInt- 00:08:15.012 [2024-11-09 17:17:34.650382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.012 [2024-11-09 17:17:34.650412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.012 [2024-11-09 17:17:34.650445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.012 [2024-11-09 17:17:34.650461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.012 [2024-11-09 17:17:34.650490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.012 [2024-11-09 17:17:34.650506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.012 #27 NEW cov: 11869 ft: 13229 corp: 9/493b lim: 85 exec/s: 0 rss: 69Mb L: 66/66 MS: 1 ShuffleBytes- 00:08:15.012 [2024-11-09 17:17:34.700361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.012 [2024-11-09 17:17:34.700392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.012 #28 NEW cov: 11869 ft: 14128 corp: 10/526b lim: 85 exec/s: 0 rss: 69Mb L: 33/66 MS: 1 EraseBytes- 00:08:15.012 [2024-11-09 17:17:34.770776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.012 [2024-11-09 17:17:34.770806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.012 [2024-11-09 17:17:34.770839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.012 [2024-11-09 17:17:34.770856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.012 [2024-11-09 17:17:34.770885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.012 [2024-11-09 17:17:34.770901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.012 [2024-11-09 17:17:34.770929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.012 [2024-11-09 
17:17:34.770944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.271 #29 NEW cov: 11869 ft: 14512 corp: 11/610b lim: 85 exec/s: 0 rss: 69Mb L: 84/84 MS: 1 CopyPart- 00:08:15.271 [2024-11-09 17:17:34.830815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.271 [2024-11-09 17:17:34.830844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.271 [2024-11-09 17:17:34.830895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.271 [2024-11-09 17:17:34.830912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.271 [2024-11-09 17:17:34.830941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.271 [2024-11-09 17:17:34.830957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.271 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:15.271 #30 NEW cov: 11886 ft: 14602 corp: 12/667b lim: 85 exec/s: 0 rss: 69Mb L: 57/84 MS: 1 ChangeBinInt- 00:08:15.271 [2024-11-09 17:17:34.891010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.271 [2024-11-09 17:17:34.891040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.271 [2024-11-09 17:17:34.891072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.271 [2024-11-09 17:17:34.891090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.271 [2024-11-09 17:17:34.891119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.271 [2024-11-09 17:17:34.891135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.271 #31 NEW cov: 11886 ft: 14627 corp: 13/724b lim: 85 exec/s: 0 rss: 69Mb L: 57/84 MS: 1 ChangeBinInt- 00:08:15.271 [2024-11-09 17:17:34.941112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.271 [2024-11-09 17:17:34.941141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.271 [2024-11-09 17:17:34.941188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.271 [2024-11-09 17:17:34.941204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.271 [2024-11-09 17:17:34.941234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.271 [2024-11-09 17:17:34.941249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.271 #32 NEW cov: 11886 ft: 14640 
corp: 14/790b lim: 85 exec/s: 32 rss: 69Mb L: 66/84 MS: 1 ChangeByte- 00:08:15.271 [2024-11-09 17:17:34.991226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.271 [2024-11-09 17:17:34.991255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.271 [2024-11-09 17:17:34.991302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.271 [2024-11-09 17:17:34.991319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.271 [2024-11-09 17:17:34.991349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.271 [2024-11-09 17:17:34.991365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.531 #33 NEW cov: 11886 ft: 14668 corp: 15/847b lim: 85 exec/s: 33 rss: 69Mb L: 57/84 MS: 1 ChangeByte- 00:08:15.531 [2024-11-09 17:17:35.061454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.531 [2024-11-09 17:17:35.061483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.531 [2024-11-09 17:17:35.061547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.531 [2024-11-09 17:17:35.061565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.532 [2024-11-09 17:17:35.061594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.532 [2024-11-09 17:17:35.061610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.532 #39 NEW cov: 11886 ft: 14727 corp: 16/913b lim: 85 exec/s: 39 rss: 69Mb L: 66/84 MS: 1 ChangeByte- 00:08:15.532 [2024-11-09 17:17:35.121653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.532 [2024-11-09 17:17:35.121682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.532 [2024-11-09 17:17:35.121729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.532 [2024-11-09 17:17:35.121746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.532 [2024-11-09 17:17:35.121775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.532 [2024-11-09 17:17:35.121791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.532 #40 NEW cov: 11886 ft: 14738 corp: 17/970b lim: 85 exec/s: 40 rss: 69Mb L: 57/84 MS: 1 PersAutoDict- DE: "\000\000\177\031D\000\227R"- 00:08:15.532 [2024-11-09 17:17:35.171759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.532 [2024-11-09 17:17:35.171788] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.532 [2024-11-09 17:17:35.171835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.532 [2024-11-09 17:17:35.171853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.532 [2024-11-09 17:17:35.171882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.532 [2024-11-09 17:17:35.171898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.532 #41 NEW cov: 11886 ft: 14747 corp: 18/1027b lim: 85 exec/s: 41 rss: 69Mb L: 57/84 MS: 1 CopyPart- 00:08:15.532 [2024-11-09 17:17:35.231949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.532 [2024-11-09 17:17:35.231977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.532 [2024-11-09 17:17:35.232026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.532 [2024-11-09 17:17:35.232044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.532 [2024-11-09 17:17:35.232073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.532 [2024-11-09 17:17:35.232088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.532 #42 NEW cov: 11886 ft: 14768 corp: 19/1085b lim: 85 exec/s: 42 rss: 69Mb L: 58/84 MS: 1 InsertByte- 00:08:15.532 [2024-11-09 17:17:35.281918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.532 [2024-11-09 17:17:35.281947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.791 #43 NEW cov: 11886 ft: 14792 corp: 20/1117b lim: 85 exec/s: 43 rss: 69Mb L: 32/84 MS: 1 EraseBytes- 00:08:15.791 [2024-11-09 17:17:35.342230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.791 [2024-11-09 17:17:35.342258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.791 [2024-11-09 17:17:35.342305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.791 [2024-11-09 17:17:35.342322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.791 [2024-11-09 17:17:35.342351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.791 [2024-11-09 17:17:35.342366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.791 [2024-11-09 17:17:35.342394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.791 [2024-11-09 17:17:35.342410] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.791 #44 NEW cov: 11886 ft: 14923 corp: 21/1191b lim: 85 exec/s: 44 rss: 69Mb L: 74/84 MS: 1 CopyPart- 00:08:15.791 [2024-11-09 17:17:35.402315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.791 [2024-11-09 17:17:35.402344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.791 [2024-11-09 17:17:35.402390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.791 [2024-11-09 17:17:35.402407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.791 [2024-11-09 17:17:35.402437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.791 [2024-11-09 17:17:35.402453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.791 #45 NEW cov: 11886 ft: 14953 corp: 22/1249b lim: 85 exec/s: 45 rss: 69Mb L: 58/84 MS: 1 ChangeBinInt- 00:08:15.791 [2024-11-09 17:17:35.472478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.791 [2024-11-09 17:17:35.472506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.791 [2024-11-09 17:17:35.472560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.791 [2024-11-09 17:17:35.472578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.791 #46 NEW cov: 11886 ft: 15256 corp: 23/1291b lim: 85 exec/s: 46 rss: 69Mb L: 42/84 MS: 1 EraseBytes- 00:08:15.791 [2024-11-09 17:17:35.532684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.791 [2024-11-09 17:17:35.532713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.791 [2024-11-09 17:17:35.532745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.791 [2024-11-09 17:17:35.532762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.791 [2024-11-09 17:17:35.532792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.791 [2024-11-09 17:17:35.532808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.051 #47 NEW cov: 11886 ft: 15276 corp: 24/1357b lim: 85 exec/s: 47 rss: 69Mb L: 66/84 MS: 1 ChangeByte- 00:08:16.051 [2024-11-09 17:17:35.602910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.051 [2024-11-09 17:17:35.602942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.051 [2024-11-09 17:17:35.602990] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.051 [2024-11-09 17:17:35.603007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.051 [2024-11-09 17:17:35.603037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:16.051 [2024-11-09 17:17:35.603053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.051 #48 NEW cov: 11886 ft: 15292 corp: 25/1423b lim: 85 exec/s: 48 rss: 69Mb L: 66/84 MS: 1 ShuffleBytes- 00:08:16.051 [2024-11-09 17:17:35.653013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.051 [2024-11-09 17:17:35.653042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.051 [2024-11-09 17:17:35.653089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.051 [2024-11-09 17:17:35.653106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.051 [2024-11-09 17:17:35.653135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:16.051 [2024-11-09 17:17:35.653151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.051 #49 NEW cov: 11886 ft: 15305 corp: 26/1480b lim: 85 exec/s: 49 rss: 69Mb L: 57/84 MS: 1 ChangeByte- 00:08:16.051 [2024-11-09 17:17:35.723202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.051 [2024-11-09 17:17:35.723232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.051 [2024-11-09 17:17:35.723279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.051 [2024-11-09 17:17:35.723296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.051 [2024-11-09 17:17:35.723327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:16.051 [2024-11-09 17:17:35.723344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.051 #50 NEW cov: 11886 ft: 15330 corp: 27/1544b lim: 85 exec/s: 50 rss: 69Mb L: 64/84 MS: 1 InsertRepeatedBytes- 00:08:16.051 [2024-11-09 17:17:35.773295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.051 [2024-11-09 17:17:35.773324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.051 [2024-11-09 17:17:35.773372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.051 [2024-11-09 17:17:35.773389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.051 [2024-11-09 
17:17:35.773418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:08:16.051 [2024-11-09 17:17:35.773434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:16.051 #51 NEW cov: 11886 ft: 15338 corp: 28/1602b lim: 85 exec/s: 51 rss: 69Mb L: 58/84 MS: 1 ChangeByte-
00:08:16.311 [2024-11-09 17:17:35.823470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:16.311 [2024-11-09 17:17:35.823509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:16.311 [2024-11-09 17:17:35.823550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:08:16.311 [2024-11-09 17:17:35.823568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:16.311 [2024-11-09 17:17:35.823597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:08:16.311 [2024-11-09 17:17:35.823613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:16.311 #52 NEW cov: 11893 ft: 15364 corp: 29/1660b lim: 85 exec/s: 52 rss: 69Mb L: 58/84 MS: 1 CMP- DE: "\363\032\250\211\353\277\204\000"-
00:08:16.311 [2024-11-09 17:17:35.873595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:16.311 [2024-11-09 17:17:35.873632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:16.311 [2024-11-09 17:17:35.873666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:08:16.311 [2024-11-09 17:17:35.873684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:16.311 [2024-11-09 17:17:35.873712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:08:16.311 [2024-11-09 17:17:35.873728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:16.311 #53 NEW cov: 11893 ft: 15368 corp: 30/1718b lim: 85 exec/s: 53 rss: 69Mb L: 58/84 MS: 1 CMP- DE: "\001\000\377\377"-
00:08:16.311 [2024-11-09 17:17:35.934199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:16.311 [2024-11-09 17:17:35.934227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:16.311 #54 NEW cov: 11893 ft: 15417 corp: 31/1747b lim: 85 exec/s: 27 rss: 69Mb L: 29/84 MS: 1 CrossOver-
00:08:16.311 #54 DONE cov: 11893 ft: 15417 corp: 31/1747b lim: 85 exec/s: 27 rss: 69Mb
00:08:16.311 ###### Recommended dictionary. ######
00:08:16.311 "\000\000\177\031D\000\227R" # Uses: 1
00:08:16.311 "\363\032\250\211\353\277\204\000" # Uses: 0
00:08:16.311 "\001\000\377\377" # Uses: 0
00:08:16.311 ###### End of recommended dictionary. ######
00:08:16.311 Done 54 runs in 2 second(s)
00:08:16.570 17:17:36 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_22.conf
00:08:16.570 17:17:36 -- ../common.sh@72 -- # (( i++ ))
00:08:16.570 17:17:36 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:16.570 17:17:36 -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1
00:08:16.570 17:17:36 -- nvmf/run.sh@23 -- # local fuzzer_type=23
00:08:16.570 17:17:36 -- nvmf/run.sh@24 -- # local timen=1
00:08:16.571 17:17:36 -- nvmf/run.sh@25 -- # local core=0x1
00:08:16.571 17:17:36 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:08:16.571 17:17:36 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf
00:08:16.571 17:17:36 -- nvmf/run.sh@29 -- # printf %02d 23
00:08:16.571 17:17:36 -- nvmf/run.sh@29 -- # port=4423
00:08:16.571 17:17:36 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:08:16.571 17:17:36 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423'
00:08:16.571 17:17:36 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:16.571 17:17:36 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 -r /var/tmp/spdk23.sock
00:08:16.571 [2024-11-09 17:17:36.126260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:16.571 [2024-11-09 17:17:36.126334] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376034 ]
00:08:16.571 EAL: No free 2048 kB hugepages reported on node 1
00:08:16.571 [2024-11-09 17:17:36.310063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.830 [2024-11-09 17:17:36.374965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:16.830 [2024-11-09 17:17:36.374965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:16.830 [2024-11-09 17:17:36.433079] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:16.830 [2024-11-09 17:17:36.449438] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 ***
00:08:16.830 INFO: Running with entropic power schedule (0xFF, 100).
00:08:16.830 INFO: Seed: 2965413141
00:08:16.830 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:08:16.830 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:08:16.830 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:08:16.830 INFO: A corpus is not provided, starting from an empty corpus
00:08:16.830 #2 INITED exec/s: 0 rss: 61Mb
00:08:16.830 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:16.830 This may also happen if the target rejected all inputs we tried so far 00:08:16.830 [2024-11-09 17:17:36.525157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.830 [2024-11-09 17:17:36.525201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.089 NEW_FUNC[1/671]: 0x465118 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:17.089 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:17.089 #15 NEW cov: 11598 ft: 11599 corp: 2/7b lim: 25 exec/s: 0 rss: 68Mb L: 6/6 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:08:17.089 [2024-11-09 17:17:36.856070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.089 [2024-11-09 17:17:36.856114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.349 #16 NEW cov: 11711 ft: 12189 corp: 3/13b lim: 25 exec/s: 0 rss: 68Mb L: 6/6 MS: 1 ChangeByte- 00:08:17.349 [2024-11-09 17:17:36.905704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.349 [2024-11-09 17:17:36.905731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.349 #17 NEW cov: 11717 ft: 12538 corp: 4/19b lim: 25 exec/s: 0 rss: 68Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:17.349 [2024-11-09 17:17:36.946757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.349 [2024-11-09 17:17:36.946790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.349 [2024-11-09 17:17:36.946889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.349 [2024-11-09 17:17:36.946921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.349 [2024-11-09 17:17:36.947037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.349 [2024-11-09 17:17:36.947061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.349 [2024-11-09 17:17:36.947177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.349 [2024-11-09 17:17:36.947199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.349 #18 NEW cov: 11802 ft: 13454 corp: 5/42b lim: 25 exec/s: 0 rss: 68Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:08:17.349 [2024-11-09 17:17:36.996187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.349 [2024-11-09 17:17:36.996213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.349 #19 NEW cov: 11802 ft: 13578 corp: 6/47b lim: 25 exec/s: 0 rss: 68Mb L: 5/23 MS: 1 
EraseBytes- 00:08:17.349 [2024-11-09 17:17:37.036392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.349 [2024-11-09 17:17:37.036417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.349 #20 NEW cov: 11802 ft: 13658 corp: 7/53b lim: 25 exec/s: 0 rss: 68Mb L: 6/23 MS: 1 ChangeByte- 00:08:17.349 [2024-11-09 17:17:37.076324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.349 [2024-11-09 17:17:37.076356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.349 [2024-11-09 17:17:37.076482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.349 [2024-11-09 17:17:37.076502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.349 #22 NEW cov: 11802 ft: 14044 corp: 8/66b lim: 25 exec/s: 0 rss: 69Mb L: 13/23 MS: 2 EraseBytes-InsertRepeatedBytes- 00:08:17.608 [2024-11-09 17:17:37.126796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.608 [2024-11-09 17:17:37.126823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.608 #23 NEW cov: 11802 ft: 14067 corp: 9/71b lim: 25 exec/s: 0 rss: 69Mb L: 5/23 MS: 1 ShuffleBytes- 00:08:17.608 [2024-11-09 17:17:37.166905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.608 [2024-11-09 17:17:37.166935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.608 [2024-11-09 17:17:37.167062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.608 [2024-11-09 17:17:37.167086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.608 [2024-11-09 17:17:37.167204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.608 [2024-11-09 17:17:37.167225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.608 #24 NEW cov: 11802 ft: 14315 corp: 10/87b lim: 25 exec/s: 0 rss: 69Mb L: 16/23 MS: 1 InsertRepeatedBytes- 00:08:17.608 [2024-11-09 17:17:37.207728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.608 [2024-11-09 17:17:37.207761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.608 [2024-11-09 17:17:37.207850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.608 [2024-11-09 17:17:37.207874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.608 [2024-11-09 17:17:37.208004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.609 [2024-11-09 17:17:37.208029] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.609 [2024-11-09 17:17:37.208155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.609 [2024-11-09 17:17:37.208176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.609 [2024-11-09 17:17:37.208303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:17.609 [2024-11-09 17:17:37.208325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.609 #25 NEW cov: 11802 ft: 14370 corp: 11/112b lim: 25 exec/s: 0 rss: 69Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:08:17.609 [2024-11-09 17:17:37.257165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.609 [2024-11-09 17:17:37.257197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.609 [2024-11-09 17:17:37.257330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.609 [2024-11-09 17:17:37.257352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.609 #26 NEW cov: 11802 ft: 14399 corp: 12/124b lim: 25 exec/s: 0 rss: 69Mb L: 12/25 MS: 1 CrossOver- 00:08:17.609 [2024-11-09 17:17:37.296786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.609 [2024-11-09 17:17:37.296814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.609 #27 NEW cov: 11802 ft: 14468 corp: 13/130b lim: 25 exec/s: 0 rss: 69Mb L: 6/25 MS: 1 ChangeBinInt- 00:08:17.609 [2024-11-09 17:17:37.348051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.609 [2024-11-09 17:17:37.348081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.609 [2024-11-09 17:17:37.348202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.609 [2024-11-09 17:17:37.348221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.609 [2024-11-09 17:17:37.348339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.609 [2024-11-09 17:17:37.348362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.609 [2024-11-09 17:17:37.348485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.609 [2024-11-09 17:17:37.348506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.609 #29 NEW cov: 11802 ft: 14496 corp: 14/154b lim: 25 exec/s: 0 rss: 69Mb L: 24/25 MS: 2 EraseBytes-InsertRepeatedBytes- 00:08:17.868 [2024-11-09 17:17:37.387151] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.868 [2024-11-09 17:17:37.387180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.868 [2024-11-09 17:17:37.387310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.868 [2024-11-09 17:17:37.387333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.868 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:17.868 #35 NEW cov: 11825 ft: 14541 corp: 15/167b lim: 25 exec/s: 0 rss: 69Mb L: 13/25 MS: 1 ChangeBinInt- 00:08:17.868 [2024-11-09 17:17:37.428232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.869 [2024-11-09 17:17:37.428266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.428388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.869 [2024-11-09 17:17:37.428410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.428532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.869 [2024-11-09 17:17:37.428567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.428677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.869 [2024-11-09 17:17:37.428700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.869 #36 NEW cov: 11825 ft: 14571 corp: 16/188b lim: 25 exec/s: 0 rss: 69Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:08:17.869 [2024-11-09 17:17:37.468081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.869 [2024-11-09 17:17:37.468112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.468196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.869 [2024-11-09 17:17:37.468221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.468339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.869 [2024-11-09 17:17:37.468363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.869 #38 NEW cov: 11825 ft: 14590 corp: 17/204b lim: 25 exec/s: 38 rss: 69Mb L: 16/25 MS: 2 EraseBytes-CrossOver- 00:08:17.869 [2024-11-09 17:17:37.508484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.869 [2024-11-09 17:17:37.508513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.508599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.869 [2024-11-09 17:17:37.508620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.508737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.869 [2024-11-09 17:17:37.508760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.508880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.869 [2024-11-09 17:17:37.508902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.869 #39 NEW cov: 11825 ft: 14603 corp: 18/228b lim: 25 exec/s: 39 rss: 69Mb L: 24/25 MS: 1 InsertByte- 00:08:17.869 [2024-11-09 17:17:37.547940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.869 [2024-11-09 17:17:37.547964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.869 #40 NEW cov: 11825 ft: 14624 corp: 19/234b lim: 25 exec/s: 40 rss: 69Mb L: 6/25 MS: 1 CopyPart- 00:08:17.869 [2024-11-09 17:17:37.588301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.869 [2024-11-09 17:17:37.588332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.588448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.869 [2024-11-09 17:17:37.588472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.588606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.869 [2024-11-09 17:17:37.588627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.869 #41 NEW cov: 11825 ft: 14697 corp: 20/250b lim: 25 exec/s: 41 rss: 69Mb L: 16/25 MS: 1 CrossOver- 00:08:17.869 [2024-11-09 17:17:37.628617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.869 [2024-11-09 17:17:37.628648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.628729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.869 [2024-11-09 17:17:37.628755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.628882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.869 [2024-11-09 17:17:37.628904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.869 [2024-11-09 17:17:37.629034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.869 [2024-11-09 17:17:37.629060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.128 #42 NEW cov: 11825 ft: 14723 corp: 21/272b lim: 25 exec/s: 42 rss: 69Mb L: 22/25 MS: 1 InsertRepeatedBytes- 00:08:18.128 [2024-11-09 17:17:37.678376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.128 [2024-11-09 17:17:37.678402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.128 #43 NEW cov: 11825 ft: 14743 corp: 22/277b lim: 25 exec/s: 43 rss: 69Mb L: 5/25 MS: 1 ChangeByte- 00:08:18.128 [2024-11-09 17:17:37.729049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.128 [2024-11-09 17:17:37.729079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.128 [2024-11-09 17:17:37.729196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.128 [2024-11-09 17:17:37.729221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.128 [2024-11-09 17:17:37.729346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.128 [2024-11-09 17:17:37.729367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.128 [2024-11-09 17:17:37.729491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:18.128 [2024-11-09 17:17:37.729514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.128 #47 NEW cov: 11825 ft: 14754 corp: 23/297b lim: 25 exec/s: 47 rss: 69Mb L: 20/25 MS: 4 EraseBytes-CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:08:18.128 [2024-11-09 17:17:37.768460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.128 [2024-11-09 17:17:37.768490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.128 [2024-11-09 17:17:37.768627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.128 [2024-11-09 17:17:37.768653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.128 #48 NEW cov: 11825 ft: 14772 corp: 24/310b lim: 25 exec/s: 48 rss: 70Mb L: 13/25 MS: 1 ChangeBinInt- 00:08:18.128 [2024-11-09 17:17:37.808594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.128 [2024-11-09 17:17:37.808622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.128 #49 NEW cov: 11825 ft: 14787 corp: 25/317b lim: 25 exec/s: 49 rss: 70Mb L: 7/25 MS: 1 InsertByte- 
00:08:18.128 [2024-11-09 17:17:37.838761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.128 [2024-11-09 17:17:37.838794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.128 [2024-11-09 17:17:37.838923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.128 [2024-11-09 17:17:37.838945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.128 #50 NEW cov: 11825 ft: 14796 corp: 26/328b lim: 25 exec/s: 50 rss: 70Mb L: 11/25 MS: 1 CrossOver- 00:08:18.128 [2024-11-09 17:17:37.878914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.128 [2024-11-09 17:17:37.878946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.129 [2024-11-09 17:17:37.879042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.129 [2024-11-09 17:17:37.879062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.129 [2024-11-09 17:17:37.879191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.129 [2024-11-09 17:17:37.879215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.388 #51 NEW cov: 11825 ft: 14814 corp: 27/347b lim: 25 exec/s: 51 rss: 70Mb L: 19/25 MS: 1 InsertRepeatedBytes- 00:08:18.388 [2024-11-09 17:17:37.929177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.388 [2024-11-09 17:17:37.929210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:37.929321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.388 [2024-11-09 17:17:37.929341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.388 #52 NEW cov: 11825 ft: 14824 corp: 28/358b lim: 25 exec/s: 52 rss: 70Mb L: 11/25 MS: 1 ChangeBit- 00:08:18.388 [2024-11-09 17:17:37.969818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.388 [2024-11-09 17:17:37.969853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:37.969952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.388 [2024-11-09 17:17:37.969977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:37.970102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.388 [2024-11-09 17:17:37.970126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:18.388 [2024-11-09 17:17:37.970252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:18.388 [2024-11-09 17:17:37.970274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:37.970403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:18.388 [2024-11-09 17:17:37.970429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:18.388 #53 NEW cov: 11825 ft: 14861 corp: 29/383b lim: 25 exec/s: 53 rss: 70Mb L: 25/25 MS: 1 CopyPart- 00:08:18.388 [2024-11-09 17:17:38.030132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.388 [2024-11-09 17:17:38.030164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:38.030257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.388 [2024-11-09 17:17:38.030281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:38.030413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.388 [2024-11-09 17:17:38.030437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:38.030564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:18.388 [2024-11-09 17:17:38.030590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:38.030719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:18.388 [2024-11-09 17:17:38.030745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:18.388 #54 NEW cov: 11825 ft: 14917 corp: 30/408b lim: 25 exec/s: 54 rss: 70Mb L: 25/25 MS: 1 ChangeASCIIInt- 00:08:18.388 [2024-11-09 17:17:38.079936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.388 [2024-11-09 17:17:38.079967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:38.080094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.388 [2024-11-09 17:17:38.080118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.388 [2024-11-09 17:17:38.080246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.388 [2024-11-09 17:17:38.080268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.388 #55 NEW cov: 11825 ft: 14979 corp: 31/427b lim: 25 exec/s: 55 rss: 70Mb L: 19/25 MS: 1 ChangeBinInt- 00:08:18.388 
[2024-11-09 17:17:38.139683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.388 [2024-11-09 17:17:38.139710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.648 #56 NEW cov: 11825 ft: 15004 corp: 32/433b lim: 25 exec/s: 56 rss: 70Mb L: 6/25 MS: 1 CrossOver- 00:08:18.648 [2024-11-09 17:17:38.180167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.648 [2024-11-09 17:17:38.180199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.180331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.648 [2024-11-09 17:17:38.180350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.180475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.648 [2024-11-09 17:17:38.180496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.180636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:18.648 [2024-11-09 17:17:38.180658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.648 #57 NEW cov: 11825 ft: 15011 corp: 33/457b lim: 25 exec/s: 57 rss: 70Mb L: 24/25 MS: 1 InsertByte- 00:08:18.648 [2024-11-09 17:17:38.219984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.648 [2024-11-09 17:17:38.220020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.220157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.648 [2024-11-09 17:17:38.220179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.648 #58 NEW cov: 11825 ft: 15030 corp: 34/468b lim: 25 exec/s: 58 rss: 70Mb L: 11/25 MS: 1 ChangeBit- 00:08:18.648 [2024-11-09 17:17:38.270019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.648 [2024-11-09 17:17:38.270045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.270191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.648 [2024-11-09 17:17:38.270212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.648 #59 NEW cov: 11825 ft: 15039 corp: 35/479b lim: 25 exec/s: 59 rss: 70Mb L: 11/25 MS: 1 EraseBytes- 00:08:18.648 [2024-11-09 17:17:38.320335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.648 [2024-11-09 17:17:38.320370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.320499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.648 [2024-11-09 17:17:38.320519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.320666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.648 [2024-11-09 17:17:38.320686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.320817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:18.648 [2024-11-09 17:17:38.320839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.648 #60 NEW cov: 11825 ft: 15084 corp: 36/499b lim: 25 exec/s: 60 rss: 70Mb L: 20/25 MS: 1 InsertRepeatedBytes- 00:08:18.648 [2024-11-09 17:17:38.360431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.648 [2024-11-09 17:17:38.360463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.360602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.648 [2024-11-09 17:17:38.360626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.360753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.648 [2024-11-09 17:17:38.360774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.360907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:18.648 [2024-11-09 17:17:38.360928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.648 #61 NEW cov: 11825 ft: 15099 corp: 37/523b lim: 25 exec/s: 61 rss: 70Mb L: 24/25 MS: 1 ShuffleBytes- 00:08:18.648 [2024-11-09 17:17:38.400243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.648 [2024-11-09 17:17:38.400273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.648 [2024-11-09 17:17:38.400404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.648 [2024-11-09 17:17:38.400430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.908 #62 NEW cov: 11825 ft: 15105 corp: 38/534b lim: 25 exec/s: 62 rss: 70Mb L: 11/25 MS: 1 ChangeByte- 00:08:18.908 [2024-11-09 17:17:38.440925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.908 [2024-11-09 17:17:38.440956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:18.908 [2024-11-09 17:17:38.441054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:18.908 [2024-11-09 17:17:38.441072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:18.908 [2024-11-09 17:17:38.441209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:18.908 [2024-11-09 17:17:38.441233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:18.908 [2024-11-09 17:17:38.441362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:18.908 [2024-11-09 17:17:38.441385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:18.908 #63 NEW cov: 11825 ft: 15107 corp: 39/555b lim: 25 exec/s: 63 rss: 70Mb L: 21/25 MS: 1 EraseBytes-
00:08:18.908 [2024-11-09 17:17:38.491073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:18.908 [2024-11-09 17:17:38.491108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:18.908 [2024-11-09 17:17:38.491196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:18.908 [2024-11-09 17:17:38.491221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:18.908 [2024-11-09 17:17:38.491346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:18.908 [2024-11-09 17:17:38.491371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:18.908 #64 pulse cov: 11825 ft: 15110 corp: 39/555b lim: 25 exec/s: 32 rss: 70Mb
00:08:18.908 #64 NEW cov: 11825 ft: 15110 corp: 40/574b lim: 25 exec/s: 32 rss: 70Mb L: 19/25 MS: 1 ChangeByte-
00:08:18.908 #64 DONE cov: 11825 ft: 15110 corp: 40/574b lim: 25 exec/s: 32 rss: 70Mb
00:08:18.908 Done 64 runs in 2 second(s)
00:08:18.908 17:17:38 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_23.conf
00:08:18.908 17:17:38 -- ../common.sh@72 -- # (( i++ ))
00:08:18.908 17:17:38 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:18.908 17:17:38 -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1
00:08:18.908 17:17:38 -- nvmf/run.sh@23 -- # local fuzzer_type=24
00:08:18.908 17:17:38 -- nvmf/run.sh@24 -- # local timen=1
00:08:18.908 17:17:38 -- nvmf/run.sh@25 -- # local core=0x1
00:08:18.908 17:17:38 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:18.908 17:17:38 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf
00:08:18.908 17:17:38 -- nvmf/run.sh@29 -- # printf %02d 24
00:08:18.908 17:17:38 -- nvmf/run.sh@29 -- # port=4424
00:08:18.908 17:17:38 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:18.908 17:17:38 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424'
00:08:18.908 17:17:38 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:18.908 17:17:38 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 -r /var/tmp/spdk24.sock
00:08:19.168 [2024-11-09 17:17:38.675203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:19.168 [2024-11-09 17:17:38.675268] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376568 ]
00:08:19.168 EAL: No free 2048 kB hugepages reported on node 1
00:08:19.168 [2024-11-09 17:17:38.850256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:19.168 [2024-11-09 17:17:38.912830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:19.168 [2024-11-09 17:17:38.912970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:19.427 [2024-11-09 17:17:38.971303] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:19.427 [2024-11-09 17:17:38.987640] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 ***
00:08:19.427 INFO: Running with entropic power schedule (0xFF, 100).
00:08:19.427 INFO: Seed: 1207457813
00:08:19.427 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:08:19.427 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:08:19.427 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:19.427 INFO: A corpus is not provided, starting from an empty corpus
00:08:19.427 #2 INITED exec/s: 0 rss: 60Mb
00:08:19.427 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:19.427 This may also happen if the target rejected all inputs we tried so far 00:08:19.427 [2024-11-09 17:17:39.032835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.427 [2024-11-09 17:17:39.032867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.427 [2024-11-09 17:17:39.032938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.427 [2024-11-09 17:17:39.032954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.686 NEW_FUNC[1/672]: 0x466208 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:19.686 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:19.686 #7 NEW cov: 11670 ft: 11669 corp: 2/50b lim: 100 exec/s: 0 rss: 68Mb L: 49/49 MS: 5 ShuffleBytes-ShuffleBytes-CopyPart-ChangeByte-InsertRepeatedBytes- 00:08:19.686 [2024-11-09 17:17:39.333533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.686 [2024-11-09 17:17:39.333568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.686 [2024-11-09 17:17:39.333637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.686 [2024-11-09 17:17:39.333657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.686 #8 NEW cov: 11783 ft: 12143 corp: 3/106b lim: 100 exec/s: 0 rss: 68Mb L: 56/56 MS: 1 InsertRepeatedBytes- 00:08:19.686 [2024-11-09 17:17:39.373581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.686 [2024-11-09 17:17:39.373610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.686 [2024-11-09 17:17:39.373662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.686 [2024-11-09 17:17:39.373677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.686 #14 NEW cov: 11789 ft: 12392 corp: 4/163b lim: 100 exec/s: 0 rss: 68Mb L: 57/57 MS: 1 InsertByte- 00:08:19.686 [2024-11-09 17:17:39.413663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.686 [2024-11-09 17:17:39.413691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.686 [2024-11-09 17:17:39.413731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:19.686 [2024-11-09 17:17:39.413746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.686 #15 NEW cov: 11874 ft: 12678 corp: 5/220b lim: 100 exec/s: 0 rss: 68Mb L: 57/57 MS: 1 InsertByte- 00:08:19.686 [2024-11-09 17:17:39.453822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.686 [2024-11-09 17:17:39.453851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.686 [2024-11-09 17:17:39.453892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.686 [2024-11-09 17:17:39.453907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.945 #16 NEW cov: 11874 ft: 12752 corp: 6/277b lim: 100 exec/s: 0 rss: 68Mb L: 57/57 MS: 1 ShuffleBytes- 00:08:19.945 [2024-11-09 17:17:39.493908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.493935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.945 [2024-11-09 17:17:39.494004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.494020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.945 #17 NEW cov: 11874 ft: 12861 corp: 7/335b lim: 100 exec/s: 0 rss: 68Mb L: 58/58 MS: 1 InsertByte- 00:08:19.945 [2024-11-09 17:17:39.534036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.534062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.945 [2024-11-09 17:17:39.534117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17506321822957565170 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.534133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.945 #18 NEW cov: 11874 ft: 12912 corp: 8/392b lim: 100 exec/s: 0 rss: 68Mb L: 57/58 MS: 1 ChangeBinInt- 00:08:19.945 [2024-11-09 17:17:39.574130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.574157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.945 [2024-11-09 17:17:39.574210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.574225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.945 #19 NEW cov: 11874 ft: 12931 corp: 9/441b lim: 100 exec/s: 0 rss: 68Mb L: 49/58 MS: 1 ShuffleBytes- 00:08:19.945 [2024-11-09 17:17:39.614539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.614566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.945 [2024-11-09 17:17:39.614613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.614629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.945 [2024-11-09 17:17:39.614680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.614712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.945 [2024-11-09 17:17:39.614766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321822972703474 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.614781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.945 #20 NEW cov: 11874 ft: 13410 corp: 10/537b lim: 100 exec/s: 0 rss: 69Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:08:19.945 [2024-11-09 17:17:39.654649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.945 [2024-11-09 17:17:39.654677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.945 [2024-11-09 17:17:39.654733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.946 [2024-11-09 17:17:39.654749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.946 [2024-11-09 17:17:39.654799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.946 [2024-11-09 17:17:39.654815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.946 [2024-11-09 17:17:39.654867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:940422246894996749 len:3565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.946 [2024-11-09 17:17:39.654881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.946 #21 NEW cov: 11874 ft: 13512 corp: 11/629b lim: 100 exec/s: 0 rss: 69Mb L: 92/96 MS: 1 CopyPart- 00:08:19.946 [2024-11-09 17:17:39.694455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
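The paired NOTICE lines throughout this run come from SPDK's qpair tracing: nvme_io_qpair_print_command prints the fuzzed submission (here an NVMe COMPARE with its sqid, cid, nsid, LBA and length), and spdk_nvme_print_completion prints the target's verdict, which for these randomized inputs is consistently INVALID NAMESPACE OR FORMAT (status code type 00, status code 0b). A rough way to digest a long run is to tally those verdicts after the fact; the helper below is a hypothetical post-processing sketch, not part of the SPDK test scripts, and assumes the run's console output was saved to a file.

    # Hypothetical helper: tally the completion statuses that
    # spdk_nvme_print_completion emitted during a saved fuzz run.
    summarize_completions() {
        local log=$1
        grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*([0-9a-f]*/[0-9a-f]*)' "$log" \
            | sed 's/.*NOTICE\*: //' \
            | sort | uniq -c | sort -rn
    }
    # Example: summarize_completions nvmf_fuzz_24.log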
00:08:19.946 [2024-11-09 17:17:39.694486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.946 [2024-11-09 17:17:39.694544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.946 [2024-11-09 17:17:39.694575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.204 #22 NEW cov: 11874 ft: 13571 corp: 12/678b lim: 100 exec/s: 0 rss: 69Mb L: 49/96 MS: 1 ShuffleBytes- 00:08:20.204 [2024-11-09 17:17:39.734644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.204 [2024-11-09 17:17:39.734673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.204 [2024-11-09 17:17:39.734738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.204 [2024-11-09 17:17:39.734754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.204 #23 NEW cov: 11874 ft: 13599 corp: 13/736b lim: 100 exec/s: 0 rss: 69Mb L: 58/96 MS: 1 CrossOver- 00:08:20.204 [2024-11-09 17:17:39.774746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8589934592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.204 [2024-11-09 17:17:39.774774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.204 [2024-11-09 17:17:39.774828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.204 [2024-11-09 17:17:39.774844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.205 #24 NEW cov: 11874 ft: 13694 corp: 14/785b lim: 100 exec/s: 0 rss: 69Mb L: 49/96 MS: 1 ChangeBit- 00:08:20.205 [2024-11-09 17:17:39.815124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.815150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.205 [2024-11-09 17:17:39.815199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.815215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.205 [2024-11-09 17:17:39.815267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.815282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.205 [2024-11-09 17:17:39.815336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:3 nsid:0 lba:940422246894996749 len:3565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.815350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.205 #25 NEW cov: 11874 ft: 13744 corp: 15/878b lim: 100 exec/s: 0 rss: 69Mb L: 93/96 MS: 1 CopyPart- 00:08:20.205 [2024-11-09 17:17:39.864990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.865017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.205 [2024-11-09 17:17:39.865076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.865096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.205 #26 NEW cov: 11874 ft: 13768 corp: 16/935b lim: 100 exec/s: 0 rss: 69Mb L: 57/96 MS: 1 ChangeBinInt- 00:08:20.205 [2024-11-09 17:17:39.905369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.905396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.205 [2024-11-09 17:17:39.905451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.905464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.205 [2024-11-09 17:17:39.905513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.905534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.205 [2024-11-09 17:17:39.905586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:940422246895009805 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.905601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.205 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:20.205 #27 NEW cov: 11897 ft: 13835 corp: 17/1027b lim: 100 exec/s: 0 rss: 69Mb L: 92/96 MS: 1 CopyPart- 00:08:20.205 [2024-11-09 17:17:39.945216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.945244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.205 [2024-11-09 17:17:39.945299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.205 [2024-11-09 17:17:39.945316] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.205 #28 NEW cov: 11897 ft: 13936 corp: 18/1085b lim: 100 exec/s: 0 rss: 69Mb L: 58/96 MS: 1 CopyPart- 00:08:20.465 [2024-11-09 17:17:39.985322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:39.985349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:39.985395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:39.985410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.465 #29 NEW cov: 11897 ft: 13974 corp: 19/1142b lim: 100 exec/s: 0 rss: 69Mb L: 57/96 MS: 1 CopyPart- 00:08:20.465 [2024-11-09 17:17:40.025752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8589934592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.025795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.025864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1095216660480 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.025891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.025960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4294967295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.026000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.465 #30 NEW cov: 11897 ft: 14305 corp: 20/1208b lim: 100 exec/s: 30 rss: 69Mb L: 66/96 MS: 1 InsertRepeatedBytes- 00:08:20.465 [2024-11-09 17:17:40.075893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.075924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.075961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.075976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.076027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.076042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.076095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 
lba:940422246894996749 len:3565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.076111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.465 #31 NEW cov: 11897 ft: 14342 corp: 21/1301b lim: 100 exec/s: 31 rss: 69Mb L: 93/96 MS: 1 CrossOver- 00:08:20.465 [2024-11-09 17:17:40.125772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.125803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.125858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.125873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.465 #32 NEW cov: 11897 ft: 14356 corp: 22/1359b lim: 100 exec/s: 32 rss: 69Mb L: 58/96 MS: 1 ShuffleBytes- 00:08:20.465 [2024-11-09 17:17:40.166135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.166163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.166222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.166237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.166292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.166307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.166358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17461567301675709170 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.166372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.465 #33 NEW cov: 11897 ft: 14365 corp: 23/1456b lim: 100 exec/s: 33 rss: 69Mb L: 97/97 MS: 1 InsertByte- 00:08:20.465 [2024-11-09 17:17:40.206248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.206276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.206340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.206357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.206410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.206425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.465 [2024-11-09 17:17:40.206479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:940422246894800141 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.465 [2024-11-09 17:17:40.206495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.465 #34 NEW cov: 11897 ft: 14404 corp: 24/1550b lim: 100 exec/s: 34 rss: 69Mb L: 94/97 MS: 1 CrossOver- 00:08:20.725 [2024-11-09 17:17:40.246063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.246091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.246149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.246165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.725 #35 NEW cov: 11897 ft: 14469 corp: 25/1608b lim: 100 exec/s: 35 rss: 69Mb L: 58/97 MS: 1 ChangeBinInt- 00:08:20.725 [2024-11-09 17:17:40.286507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.286541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.286605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.286621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.286672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.286687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.286749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:940422246894800141 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.286764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.725 #36 NEW cov: 11897 ft: 14511 corp: 26/1703b lim: 100 exec/s: 36 rss: 70Mb L: 95/97 MS: 1 InsertByte- 00:08:20.725 [2024-11-09 17:17:40.336646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
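Each "#NN NEW cov" line is libFuzzer's status report: cov counts covered code edges, ft coverage features, corp the corpus size in units and bytes, lim the current input-length limit, exec/s the execution rate, rss resident memory, L the new unit's length versus the longest unit so far, and the trailing MS field names the mutation(s) (ChangeBit, CrossOver, InsertByte, ...) that produced the input. To replay fuzz target 24 outside Jenkins, the invocation logged above can be reused verbatim; the sketch below only parameterizes the workspace path (the SPDK variable is introduced here for readability) and assumes /tmp/fuzz_json_24.conf has been regenerated first via the sed rewrite of fuzz_json.conf shown earlier.

    # Minimal re-run sketch for fuzz target 24, using exactly the flags from
    # the logged invocation; adjust SPDK to the local checkout.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$SPDK/../output/llvm/" \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' \
        -c /tmp/fuzz_json_24.conf -t 1 \
        -D "$SPDK/../corpus/llvm_nvmf_24" -Z 24 -r /var/tmp/spdk24.sock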
00:08:20.725 [2024-11-09 17:17:40.336675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.336709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.336724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.336776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.336792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.336843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:940422246894996749 len:3565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.336858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.725 #37 NEW cov: 11897 ft: 14517 corp: 27/1797b lim: 100 exec/s: 37 rss: 70Mb L: 94/97 MS: 1 InsertByte- 00:08:20.725 [2024-11-09 17:17:40.376664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.376691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.376728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.376742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.376794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.376809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.725 #38 NEW cov: 11897 ft: 14525 corp: 28/1863b lim: 100 exec/s: 38 rss: 70Mb L: 66/97 MS: 1 CopyPart- 00:08:20.725 [2024-11-09 17:17:40.426869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.426897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.426950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 17:17:40.426966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.725 [2024-11-09 17:17:40.427018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.725 [2024-11-09 
17:17:40.427033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.726 [2024-11-09 17:17:40.427085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321822972703474 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.726 [2024-11-09 17:17:40.427100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.726 #39 NEW cov: 11897 ft: 14552 corp: 29/1959b lim: 100 exec/s: 39 rss: 70Mb L: 96/97 MS: 1 ShuffleBytes- 00:08:20.726 [2024-11-09 17:17:40.466739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:419430400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.726 [2024-11-09 17:17:40.466770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.726 [2024-11-09 17:17:40.466823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.726 [2024-11-09 17:17:40.466839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.726 #40 NEW cov: 11897 ft: 14559 corp: 30/2012b lim: 100 exec/s: 40 rss: 70Mb L: 53/97 MS: 1 CMP- DE: "\031\000\000\000"- 00:08:20.986 [2024-11-09 17:17:40.506831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.506858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.506910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.506926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.986 #41 NEW cov: 11897 ft: 14657 corp: 31/2069b lim: 100 exec/s: 41 rss: 70Mb L: 57/97 MS: 1 ChangeBit- 00:08:20.986 [2024-11-09 17:17:40.547272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.547299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.547348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.547363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.547416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.547431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.547484] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17506321822972703474 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.547499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.986 #42 NEW cov: 11897 ft: 14679 corp: 32/2165b lim: 100 exec/s: 42 rss: 70Mb L: 96/97 MS: 1 ChangeByte- 00:08:20.986 [2024-11-09 17:17:40.587351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.587378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.587427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.587442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.587494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.587509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.587565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17461567301675709170 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.587583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.986 #43 NEW cov: 11897 ft: 14711 corp: 33/2262b lim: 100 exec/s: 43 rss: 70Mb L: 97/97 MS: 1 ChangeBit- 00:08:20.986 [2024-11-09 17:17:40.627203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.627230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.627283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422457348394253 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.627299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.986 #44 NEW cov: 11897 ft: 14736 corp: 34/2319b lim: 100 exec/s: 44 rss: 70Mb L: 57/97 MS: 1 ChangeByte- 00:08:20.986 [2024-11-09 17:17:40.667626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.667653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.667706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.667721] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.667774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.667789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.667843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:940422246894800141 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.667858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.986 #45 NEW cov: 11897 ft: 14742 corp: 35/2413b lim: 100 exec/s: 45 rss: 70Mb L: 94/97 MS: 1 ChangeBit- 00:08:20.986 [2024-11-09 17:17:40.707718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.707746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.707799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744071873663743 len:65427 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.707813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.707865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.707880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.707932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17461567301675709170 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.707948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.986 #46 NEW cov: 11897 ft: 14767 corp: 36/2510b lim: 100 exec/s: 46 rss: 70Mb L: 97/97 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:20.986 [2024-11-09 17:17:40.747555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940413450751642893 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.747581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.986 [2024-11-09 17:17:40.747633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.986 [2024-11-09 17:17:40.747650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.246 #47 NEW cov: 11897 ft: 14792 corp: 37/2567b lim: 100 exec/s: 47 rss: 70Mb L: 57/97 MS: 1 ChangeBit- 00:08:21.246 [2024-11-09 17:17:40.787758] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.787788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.246 [2024-11-09 17:17:40.787824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.787841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.246 [2024-11-09 17:17:40.787895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744069414584320 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.787911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.246 #48 NEW cov: 11897 ft: 14808 corp: 38/2641b lim: 100 exec/s: 48 rss: 70Mb L: 74/97 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:21.246 [2024-11-09 17:17:40.827770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422250668100877 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.827797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.246 [2024-11-09 17:17:40.827863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.827878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.246 #49 NEW cov: 11897 ft: 14814 corp: 39/2699b lim: 100 exec/s: 49 rss: 70Mb L: 58/97 MS: 1 ChangeBinInt- 00:08:21.246 [2024-11-09 17:17:40.867982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.868010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.246 [2024-11-09 17:17:40.868061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4615359546570640653 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.868078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.246 [2024-11-09 17:17:40.868131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17009265717352926477 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.868146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.246 #50 NEW cov: 11897 ft: 14824 corp: 40/2765b lim: 100 exec/s: 50 rss: 70Mb L: 66/97 MS: 1 EraseBytes- 00:08:21.246 [2024-11-09 17:17:40.907970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940413450751642893 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.908001] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.246 [2024-11-09 17:17:40.908056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.908071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.246 #51 NEW cov: 11897 ft: 14843 corp: 41/2822b lim: 100 exec/s: 51 rss: 70Mb L: 57/97 MS: 1 ChangeBinInt- 00:08:21.246 [2024-11-09 17:17:40.947940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.246 [2024-11-09 17:17:40.947968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.246 #52 NEW cov: 11897 ft: 15646 corp: 42/2854b lim: 100 exec/s: 52 rss: 70Mb L: 32/97 MS: 1 EraseBytes- 00:08:21.247 [2024-11-09 17:17:40.988180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.247 [2024-11-09 17:17:40.988208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.247 [2024-11-09 17:17:40.988261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422457348394253 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.247 [2024-11-09 17:17:40.988277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.247 #53 NEW cov: 11897 ft: 15682 corp: 43/2911b lim: 100 exec/s: 53 rss: 70Mb L: 57/97 MS: 1 CMP- DE: "\021\000"- 00:08:21.506 [2024-11-09 17:17:41.028493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:940422246844665101 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.506 [2024-11-09 17:17:41.028520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.506 [2024-11-09 17:17:41.028563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:940422246894996749 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.506 [2024-11-09 17:17:41.028579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.506 [2024-11-09 17:17:41.028633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:937030197688732941 len:3342 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.506 [2024-11-09 17:17:41.028648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.506 #54 NEW cov: 11897 ft: 15698 corp: 44/2972b lim: 100 exec/s: 27 rss: 70Mb L: 61/97 MS: 1 CMP- DE: "\001\000\000\000"- 00:08:21.506 #54 DONE cov: 11897 ft: 15698 corp: 44/2972b lim: 100 exec/s: 27 rss: 70Mb 00:08:21.506 ###### Recommended dictionary. 
###### 00:08:21.506 "\031\000\000\000" # Uses: 0 00:08:21.506 "\377\377\377\377\377\377\377\377" # Uses: 1 00:08:21.506 "\021\000" # Uses: 0 00:08:21.506 "\001\000\000\000" # Uses: 0 00:08:21.506 ###### End of recommended dictionary. ###### 00:08:21.506 Done 54 runs in 2 second(s) 00:08:21.506 17:17:41 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_24.conf 00:08:21.506 17:17:41 -- ../common.sh@72 -- # (( i++ )) 00:08:21.506 17:17:41 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:21.506 17:17:41 -- nvmf/run.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:08:21.506 00:08:21.506 real 1m4.085s 00:08:21.506 user 1m40.598s 00:08:21.506 sys 0m7.158s 00:08:21.506 17:17:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.506 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:08:21.506 ************************************ 00:08:21.506 END TEST nvmf_fuzz 00:08:21.506 ************************************ 00:08:21.506 17:17:41 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:21.506 17:17:41 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:21.506 17:17:41 -- fuzz/llvm.sh@20 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:21.506 17:17:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.506 17:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.506 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:08:21.506 ************************************ 00:08:21.506 START TEST vfio_fuzz 00:08:21.506 ************************************ 00:08:21.506 17:17:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:21.768 * Looking for test storage... 00:08:21.768 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.768 17:17:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:21.768 17:17:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:21.768 17:17:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:21.768 17:17:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:21.768 17:17:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:21.768 17:17:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:21.768 17:17:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:21.768 17:17:41 -- scripts/common.sh@335 -- # IFS=.-: 00:08:21.768 17:17:41 -- scripts/common.sh@335 -- # read -ra ver1 00:08:21.768 17:17:41 -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.768 17:17:41 -- scripts/common.sh@336 -- # read -ra ver2 00:08:21.768 17:17:41 -- scripts/common.sh@337 -- # local 'op=<' 00:08:21.768 17:17:41 -- scripts/common.sh@339 -- # ver1_l=2 00:08:21.768 17:17:41 -- scripts/common.sh@340 -- # ver2_l=1 00:08:21.768 17:17:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:21.768 17:17:41 -- scripts/common.sh@343 -- # case "$op" in 00:08:21.768 17:17:41 -- scripts/common.sh@344 -- # : 1 00:08:21.768 17:17:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:21.768 17:17:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.768 17:17:41 -- scripts/common.sh@364 -- # decimal 1 00:08:21.768 17:17:41 -- scripts/common.sh@352 -- # local d=1 00:08:21.768 17:17:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.768 17:17:41 -- scripts/common.sh@354 -- # echo 1 00:08:21.768 17:17:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:21.768 17:17:41 -- scripts/common.sh@365 -- # decimal 2 00:08:21.768 17:17:41 -- scripts/common.sh@352 -- # local d=2 00:08:21.768 17:17:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.768 17:17:41 -- scripts/common.sh@354 -- # echo 2 00:08:21.768 17:17:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:21.768 17:17:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:21.768 17:17:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:21.768 17:17:41 -- scripts/common.sh@367 -- # return 0 00:08:21.768 17:17:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.768 17:17:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:21.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.768 --rc genhtml_branch_coverage=1 00:08:21.768 --rc genhtml_function_coverage=1 00:08:21.768 --rc genhtml_legend=1 00:08:21.768 --rc geninfo_all_blocks=1 00:08:21.768 --rc geninfo_unexecuted_blocks=1 00:08:21.768 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:21.768 ' 00:08:21.768 17:17:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:21.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.768 --rc genhtml_branch_coverage=1 00:08:21.768 --rc genhtml_function_coverage=1 00:08:21.768 --rc genhtml_legend=1 00:08:21.768 --rc geninfo_all_blocks=1 00:08:21.768 --rc geninfo_unexecuted_blocks=1 00:08:21.768 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:21.768 ' 00:08:21.768 17:17:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:21.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.768 --rc genhtml_branch_coverage=1 00:08:21.768 --rc genhtml_function_coverage=1 00:08:21.768 --rc genhtml_legend=1 00:08:21.768 --rc geninfo_all_blocks=1 00:08:21.768 --rc geninfo_unexecuted_blocks=1 00:08:21.768 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:21.768 ' 00:08:21.768 17:17:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:21.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.768 --rc genhtml_branch_coverage=1 00:08:21.768 --rc genhtml_function_coverage=1 00:08:21.768 --rc genhtml_legend=1 00:08:21.768 --rc geninfo_all_blocks=1 00:08:21.768 --rc geninfo_unexecuted_blocks=1 00:08:21.768 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:21.768 ' 00:08:21.768 17:17:41 -- vfio/run.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:21.768 17:17:41 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:21.768 17:17:41 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:21.768 17:17:41 -- common/autotest_common.sh@34 -- # set -e 00:08:21.768 17:17:41 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:21.768 17:17:41 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:21.768 17:17:41 -- common/autotest_common.sh@38 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:21.768 17:17:41 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:21.768 17:17:41 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:21.768 17:17:41 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:21.768 17:17:41 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:21.768 17:17:41 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:21.768 17:17:41 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:21.768 17:17:41 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:21.768 17:17:41 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:21.768 17:17:41 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:21.768 17:17:41 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:21.768 17:17:41 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:21.768 17:17:41 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:21.768 17:17:41 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:21.768 17:17:41 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:21.768 17:17:41 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:21.768 17:17:41 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:21.768 17:17:41 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:21.768 17:17:41 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:21.768 17:17:41 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:21.768 17:17:41 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:21.768 17:17:41 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:21.768 17:17:41 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:21.768 17:17:41 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:21.768 17:17:41 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:21.768 17:17:41 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:21.768 17:17:41 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:21.768 17:17:41 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:21.768 17:17:41 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:21.768 17:17:41 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:21.768 17:17:41 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:21.768 17:17:41 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:21.769 17:17:41 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:21.769 17:17:41 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:21.769 17:17:41 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:21.769 17:17:41 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:21.769 17:17:41 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:21.769 17:17:41 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:21.769 17:17:41 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:21.769 17:17:41 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:21.769 17:17:41 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:21.769 17:17:41 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:21.769 17:17:41 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:21.769 17:17:41 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 
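The scripts/common.sh xtrace a few entries back shows how the harness decides whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared field by field. Reconstructed from those traced lines, a simplified standalone sketch looks like the following; the real helper also normalizes non-numeric fields through its decimal() function and supports more operators, both of which this sketch skips.

    # Simplified reconstruction of the traced comparison: succeeds when
    # "$1 $2 $3" holds, for op '<' or '>'.
    cmp_versions() {
        local op=$2 IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v d1 d2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0}; d2=${ver2[v]:-0}
            # First differing field decides; the status of [[ ]] is returned.
            if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
            if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
        done
        return 1   # versions equal: neither strictly '<' nor '>'
    }
    # Example: cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"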
00:08:21.769 17:17:41 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:21.769 17:17:41 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:21.769 17:17:41 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:21.769 17:17:41 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:21.769 17:17:41 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:21.769 17:17:41 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:21.769 17:17:41 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:21.769 17:17:41 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:21.769 17:17:41 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:21.769 17:17:41 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:21.769 17:17:41 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:21.769 17:17:41 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:21.769 17:17:41 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:21.769 17:17:41 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:21.769 17:17:41 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:21.769 17:17:41 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:21.769 17:17:41 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:21.769 17:17:41 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:21.769 17:17:41 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:21.769 17:17:41 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:21.769 17:17:41 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:21.769 17:17:41 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:08:21.769 17:17:41 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:21.769 17:17:41 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:21.769 17:17:41 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:21.769 17:17:41 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:21.769 17:17:41 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:21.769 17:17:41 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:21.769 17:17:41 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:21.769 17:17:41 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:21.769 17:17:41 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:21.769 17:17:41 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:21.769 17:17:41 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:21.769 17:17:41 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:21.769 17:17:41 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:21.769 17:17:41 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:21.769 17:17:41 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:21.769 17:17:41 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:21.769 17:17:41 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:21.769 17:17:41 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:21.769 17:17:41 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:21.769 17:17:41 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:21.769 17:17:41 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 
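The version gate traced at the top of this section (scripts/common.sh: lt 1.15 2 via cmp_versions) splits each version string on ".", "-" and ":" and compares the pieces numerically, which is how the harness decides that lcov 1.15 predates 2.x and needs the old --rc lcov_* option spellings. A minimal standalone sketch of that comparison, assuming purely numeric components as in the traced call (the real cmp_versions also validates each piece against ^[0-9]+$ and supports other operators):

# Sketch only -- not the actual scripts/common.sh implementation.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v c1 c2
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        c1=${ver1[v]:-0} c2=${ver2[v]:-0}   # missing components compare as 0
        (( c1 > c2 )) && return 1           # left side newer: not less-than
        (( c1 < c2 )) && return 0           # left side older: less-than holds
    done
    return 1                                # equal versions: not less-than
}

lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 spellings"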
00:08:21.769 17:17:41 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:21.769 17:17:41 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:21.769 17:17:41 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:21.769 17:17:41 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:21.769 17:17:41 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:21.769 17:17:41 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:21.769 17:17:41 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:21.769 17:17:41 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:21.769 17:17:41 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:21.769 17:17:41 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:21.769 #define SPDK_CONFIG_H 00:08:21.769 #define SPDK_CONFIG_APPS 1 00:08:21.769 #define SPDK_CONFIG_ARCH native 00:08:21.769 #undef SPDK_CONFIG_ASAN 00:08:21.769 #undef SPDK_CONFIG_AVAHI 00:08:21.769 #undef SPDK_CONFIG_CET 00:08:21.769 #define SPDK_CONFIG_COVERAGE 1 00:08:21.769 #define SPDK_CONFIG_CROSS_PREFIX 00:08:21.769 #undef SPDK_CONFIG_CRYPTO 00:08:21.769 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:21.769 #undef SPDK_CONFIG_CUSTOMOCF 00:08:21.769 #undef SPDK_CONFIG_DAOS 00:08:21.769 #define SPDK_CONFIG_DAOS_DIR 00:08:21.769 #define SPDK_CONFIG_DEBUG 1 00:08:21.769 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:21.769 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:21.769 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:21.769 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:21.769 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:21.769 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:21.769 #define SPDK_CONFIG_EXAMPLES 1 00:08:21.769 #undef SPDK_CONFIG_FC 00:08:21.769 #define SPDK_CONFIG_FC_PATH 00:08:21.769 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:21.769 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:21.769 #undef SPDK_CONFIG_FUSE 00:08:21.769 #define SPDK_CONFIG_FUZZER 1 00:08:21.769 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:21.769 #undef SPDK_CONFIG_GOLANG 00:08:21.769 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:21.769 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:21.769 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:21.769 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:21.769 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:21.769 #define SPDK_CONFIG_IDXD 1 00:08:21.769 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:21.769 #undef SPDK_CONFIG_IPSEC_MB 00:08:21.769 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:21.769 #define SPDK_CONFIG_ISAL 1 00:08:21.769 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:21.769 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:21.769 #define SPDK_CONFIG_LIBDIR 00:08:21.769 #undef SPDK_CONFIG_LTO 00:08:21.769 #define SPDK_CONFIG_MAX_LCORES 00:08:21.769 #define SPDK_CONFIG_NVME_CUSE 1 00:08:21.769 #undef SPDK_CONFIG_OCF 00:08:21.769 #define SPDK_CONFIG_OCF_PATH 00:08:21.769 #define SPDK_CONFIG_OPENSSL_PATH 00:08:21.769 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:21.769 #undef SPDK_CONFIG_PGO_USE 00:08:21.769 #define SPDK_CONFIG_PREFIX /usr/local 00:08:21.769 #undef SPDK_CONFIG_RAID5F 00:08:21.769 #undef SPDK_CONFIG_RBD 00:08:21.769 #define 
SPDK_CONFIG_RDMA 1 00:08:21.769 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:21.769 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:21.769 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:21.769 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:21.769 #undef SPDK_CONFIG_SHARED 00:08:21.769 #undef SPDK_CONFIG_SMA 00:08:21.769 #define SPDK_CONFIG_TESTS 1 00:08:21.769 #undef SPDK_CONFIG_TSAN 00:08:21.769 #define SPDK_CONFIG_UBLK 1 00:08:21.769 #define SPDK_CONFIG_UBSAN 1 00:08:21.769 #undef SPDK_CONFIG_UNIT_TESTS 00:08:21.769 #undef SPDK_CONFIG_URING 00:08:21.769 #define SPDK_CONFIG_URING_PATH 00:08:21.769 #undef SPDK_CONFIG_URING_ZNS 00:08:21.769 #undef SPDK_CONFIG_USDT 00:08:21.769 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:21.769 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:21.769 #define SPDK_CONFIG_VFIO_USER 1 00:08:21.769 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:21.769 #define SPDK_CONFIG_VHOST 1 00:08:21.769 #define SPDK_CONFIG_VIRTIO 1 00:08:21.769 #undef SPDK_CONFIG_VTUNE 00:08:21.769 #define SPDK_CONFIG_VTUNE_DIR 00:08:21.769 #define SPDK_CONFIG_WERROR 1 00:08:21.769 #define SPDK_CONFIG_WPDK_DIR 00:08:21.769 #undef SPDK_CONFIG_XNVME 00:08:21.769 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:21.769 17:17:41 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:21.769 17:17:41 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:21.769 17:17:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.769 17:17:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.769 17:17:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.769 17:17:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.769 17:17:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.769 17:17:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.769 17:17:41 -- paths/export.sh@5 -- # export PATH 00:08:21.770 17:17:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.770 17:17:41 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:21.770 17:17:41 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:21.770 17:17:41 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:21.770 17:17:41 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:21.770 17:17:41 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:21.770 17:17:41 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:21.770 17:17:41 -- pm/common@16 -- # TEST_TAG=N/A 00:08:21.770 17:17:41 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:21.770 17:17:41 -- common/autotest_common.sh@52 -- # : 1 00:08:21.770 17:17:41 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:21.770 17:17:41 -- common/autotest_common.sh@56 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:21.770 17:17:41 -- common/autotest_common.sh@58 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:21.770 17:17:41 -- common/autotest_common.sh@60 -- # : 1 00:08:21.770 17:17:41 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:21.770 17:17:41 -- common/autotest_common.sh@62 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:21.770 17:17:41 -- common/autotest_common.sh@64 -- # : 00:08:21.770 17:17:41 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:21.770 17:17:41 -- common/autotest_common.sh@66 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:21.770 17:17:41 -- common/autotest_common.sh@68 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:21.770 17:17:41 -- common/autotest_common.sh@70 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:21.770 17:17:41 -- common/autotest_common.sh@72 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:21.770 17:17:41 -- common/autotest_common.sh@74 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:21.770 17:17:41 -- common/autotest_common.sh@76 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:21.770 17:17:41 -- common/autotest_common.sh@78 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:21.770 17:17:41 -- common/autotest_common.sh@80 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:21.770 17:17:41 -- common/autotest_common.sh@82 -- # : 0 00:08:21.770 17:17:41 
-- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:21.770 17:17:41 -- common/autotest_common.sh@84 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:21.770 17:17:41 -- common/autotest_common.sh@86 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:21.770 17:17:41 -- common/autotest_common.sh@88 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:21.770 17:17:41 -- common/autotest_common.sh@90 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:21.770 17:17:41 -- common/autotest_common.sh@92 -- # : 1 00:08:21.770 17:17:41 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:21.770 17:17:41 -- common/autotest_common.sh@94 -- # : 1 00:08:21.770 17:17:41 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:21.770 17:17:41 -- common/autotest_common.sh@96 -- # : rdma 00:08:21.770 17:17:41 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:21.770 17:17:41 -- common/autotest_common.sh@98 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:21.770 17:17:41 -- common/autotest_common.sh@100 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:21.770 17:17:41 -- common/autotest_common.sh@102 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:21.770 17:17:41 -- common/autotest_common.sh@104 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:21.770 17:17:41 -- common/autotest_common.sh@106 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:21.770 17:17:41 -- common/autotest_common.sh@108 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:21.770 17:17:41 -- common/autotest_common.sh@110 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:21.770 17:17:41 -- common/autotest_common.sh@112 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:21.770 17:17:41 -- common/autotest_common.sh@114 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:21.770 17:17:41 -- common/autotest_common.sh@116 -- # : 1 00:08:21.770 17:17:41 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:21.770 17:17:41 -- common/autotest_common.sh@118 -- # : 00:08:21.770 17:17:41 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:21.770 17:17:41 -- common/autotest_common.sh@120 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:21.770 17:17:41 -- common/autotest_common.sh@122 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:21.770 17:17:41 -- common/autotest_common.sh@124 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:21.770 17:17:41 -- common/autotest_common.sh@126 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:21.770 17:17:41 -- common/autotest_common.sh@128 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:21.770 17:17:41 -- common/autotest_common.sh@130 -- # : 0 00:08:21.770 
17:17:41 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:21.770 17:17:41 -- common/autotest_common.sh@132 -- # : 00:08:21.770 17:17:41 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:21.770 17:17:41 -- common/autotest_common.sh@134 -- # : true 00:08:21.770 17:17:41 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:21.770 17:17:41 -- common/autotest_common.sh@136 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:21.770 17:17:41 -- common/autotest_common.sh@138 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:21.770 17:17:41 -- common/autotest_common.sh@140 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:21.770 17:17:41 -- common/autotest_common.sh@142 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:21.770 17:17:41 -- common/autotest_common.sh@144 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:21.770 17:17:41 -- common/autotest_common.sh@146 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:21.770 17:17:41 -- common/autotest_common.sh@148 -- # : 00:08:21.770 17:17:41 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:21.770 17:17:41 -- common/autotest_common.sh@150 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:21.770 17:17:41 -- common/autotest_common.sh@152 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:21.770 17:17:41 -- common/autotest_common.sh@154 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:21.770 17:17:41 -- common/autotest_common.sh@156 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:21.770 17:17:41 -- common/autotest_common.sh@158 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:21.770 17:17:41 -- common/autotest_common.sh@160 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:21.770 17:17:41 -- common/autotest_common.sh@163 -- # : 00:08:21.770 17:17:41 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:21.770 17:17:41 -- common/autotest_common.sh@165 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:21.770 17:17:41 -- common/autotest_common.sh@167 -- # : 0 00:08:21.770 17:17:41 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:21.770 17:17:41 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:21.770 17:17:41 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:21.770 17:17:41 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:21.770 17:17:41 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:21.770 17:17:41 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.770 17:17:41 -- common/autotest_common.sh@173 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.770 17:17:41 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.771 17:17:41 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.771 17:17:41 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:21.771 17:17:41 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:21.771 17:17:41 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:21.771 17:17:41 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:21.771 17:17:41 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:21.771 17:17:41 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:21.771 17:17:41 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:21.771 17:17:41 
-- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:21.771 17:17:41 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:21.771 17:17:41 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:21.771 17:17:41 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:21.771 17:17:41 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:21.771 17:17:41 -- common/autotest_common.sh@196 -- # cat 00:08:21.771 17:17:41 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:21.771 17:17:41 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:21.771 17:17:41 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:21.771 17:17:41 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:21.771 17:17:41 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:21.771 17:17:41 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:21.771 17:17:41 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:21.771 17:17:41 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:21.771 17:17:41 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:21.771 17:17:41 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:21.771 17:17:41 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:21.771 17:17:41 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:21.771 17:17:41 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:21.771 17:17:41 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:21.771 17:17:41 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:21.771 17:17:41 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:21.771 17:17:41 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:21.771 17:17:41 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:21.771 17:17:41 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:21.771 17:17:41 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:21.771 17:17:41 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:21.771 17:17:41 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:21.771 17:17:41 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:21.771 17:17:41 -- common/autotest_common.sh@250 -- # [[ 1 -eq 1 ]] 00:08:21.771 17:17:41 -- common/autotest_common.sh@250 -- # _LCOV=1 00:08:21.771 17:17:41 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:21.771 17:17:41 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:21.771 17:17:41 -- common/autotest_common.sh@255 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:21.771 17:17:41 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:21.771 17:17:41 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:21.771 17:17:41 -- common/autotest_common.sh@259 -- # valgrind= 00:08:21.771 17:17:41 -- common/autotest_common.sh@265 -- # uname -s 00:08:21.771 17:17:41 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:21.771 17:17:41 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:21.771 17:17:41 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:21.771 17:17:41 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:21.771 17:17:41 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:21.771 17:17:41 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:21.771 17:17:41 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:21.771 17:17:41 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:21.771 17:17:41 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:21.771 17:17:41 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:21.771 17:17:41 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:21.771 17:17:41 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:21.771 17:17:41 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:21.771 17:17:41 -- common/autotest_common.sh@319 -- # [[ -z 377090 ]] 00:08:21.771 17:17:41 -- common/autotest_common.sh@319 -- # kill -0 377090 00:08:21.771 17:17:41 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:21.771 17:17:41 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:21.771 17:17:41 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:21.771 17:17:41 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:21.771 17:17:41 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:21.771 17:17:41 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:21.771 17:17:41 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:21.771 17:17:41 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:21.771 17:17:41 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.N2uikK 00:08:21.771 17:17:41 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:21.771 17:17:41 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:21.771 17:17:41 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:21.771 17:17:41 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.N2uikK/tests/vfio /tmp/spdk.N2uikK 00:08:21.771 17:17:41 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:22.031 17:17:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:22.031 17:17:41 -- common/autotest_common.sh@328 -- # df -T 00:08:22.031 17:17:41 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:22.031 17:17:41 -- 
common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:22.031 17:17:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:22.031 17:17:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:22.031 17:17:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:08:22.031 17:17:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=53659770880 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730578432 00:08:22.031 17:17:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=8070807552 00:08:22.031 17:17:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=30862696448 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865289216 00:08:22.031 17:17:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=2592768 00:08:22.031 17:17:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=12340125696 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346118144 00:08:22.031 17:17:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=5992448 00:08:22.031 17:17:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864220160 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865289216 00:08:22.031 17:17:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=1069056 00:08:22.031 17:17:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:22.031 17:17:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:08:22.031 17:17:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6173057024 00:08:22.031 17:17:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:22.031 17:17:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 
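The block above is autotest_common.sh's set_test_storage probe: df -T output is read row by row (header stripped with grep -v Filesystem) into the parallel associative arrays mounts/fss/sizes/avails/uses, after which the script checks candidate directories for enough free space to satisfy the ~2 GiB request (2214592512 bytes). A compact sketch of the same idea; the --block-size=1 flag is an assumption here to keep every figure in bytes, and the traced script's exact df invocation and fallback logic differ:

#!/usr/bin/env bash
# Sketch of a df -T storage probe in the style of set_test_storage above.
requested_size=2214592512                 # ~2 GiB, as requested in this run
declare -A mounts fss sizes avails uses
while read -r source fs size used avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    avails["$mount"]=$avail
    uses["$mount"]=$used
done < <(df -T --block-size=1 | grep -v Filesystem)

# Walk up from the test directory until we land on a known mount point.
target_dir=$PWD
while [[ -z ${mounts[$target_dir]:-} && $target_dir != / ]]; do
    target_dir=$(dirname "$target_dir")
done
if (( ${avails[$target_dir]:-0} >= requested_size )); then
    printf '* Found test storage at %s\n' "$target_dir"
fi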
00:08:22.031 17:17:41 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:22.031 * Looking for test storage... 00:08:22.031 17:17:41 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:22.031 17:17:41 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:22.031 17:17:41 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.032 17:17:41 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:22.032 17:17:41 -- common/autotest_common.sh@373 -- # mount=/ 00:08:22.032 17:17:41 -- common/autotest_common.sh@375 -- # target_space=53659770880 00:08:22.032 17:17:41 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:22.032 17:17:41 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:22.032 17:17:41 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:22.032 17:17:41 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:22.032 17:17:41 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:22.032 17:17:41 -- common/autotest_common.sh@382 -- # new_size=10285400064 00:08:22.032 17:17:41 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:22.032 17:17:41 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.032 17:17:41 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.032 17:17:41 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.032 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.032 17:17:41 -- common/autotest_common.sh@390 -- # return 0 00:08:22.032 17:17:41 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:22.032 17:17:41 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:22.032 17:17:41 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:22.032 17:17:41 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:22.032 17:17:41 -- common/autotest_common.sh@1682 -- # true 00:08:22.032 17:17:41 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:22.032 17:17:41 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:22.032 17:17:41 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:22.032 17:17:41 -- common/autotest_common.sh@27 -- # exec 00:08:22.032 17:17:41 -- common/autotest_common.sh@29 -- # exec 00:08:22.032 17:17:41 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:22.032 17:17:41 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:22.032 17:17:41 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:22.032 17:17:41 -- common/autotest_common.sh@18 -- # set -x 00:08:22.032 17:17:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:22.032 17:17:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:22.032 17:17:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:22.032 17:17:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:22.032 17:17:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:22.032 17:17:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:22.032 17:17:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:22.032 17:17:41 -- scripts/common.sh@335 -- # IFS=.-: 00:08:22.032 17:17:41 -- scripts/common.sh@335 -- # read -ra ver1 00:08:22.032 17:17:41 -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.032 17:17:41 -- scripts/common.sh@336 -- # read -ra ver2 00:08:22.032 17:17:41 -- scripts/common.sh@337 -- # local 'op=<' 00:08:22.032 17:17:41 -- scripts/common.sh@339 -- # ver1_l=2 00:08:22.032 17:17:41 -- scripts/common.sh@340 -- # ver2_l=1 00:08:22.032 17:17:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:22.032 17:17:41 -- scripts/common.sh@343 -- # case "$op" in 00:08:22.032 17:17:41 -- scripts/common.sh@344 -- # : 1 00:08:22.032 17:17:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:22.032 17:17:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.032 17:17:41 -- scripts/common.sh@364 -- # decimal 1 00:08:22.032 17:17:41 -- scripts/common.sh@352 -- # local d=1 00:08:22.032 17:17:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.032 17:17:41 -- scripts/common.sh@354 -- # echo 1 00:08:22.032 17:17:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:22.032 17:17:41 -- scripts/common.sh@365 -- # decimal 2 00:08:22.032 17:17:41 -- scripts/common.sh@352 -- # local d=2 00:08:22.032 17:17:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.032 17:17:41 -- scripts/common.sh@354 -- # echo 2 00:08:22.032 17:17:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:22.032 17:17:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:22.032 17:17:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:22.032 17:17:41 -- scripts/common.sh@367 -- # return 0 00:08:22.032 17:17:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.032 17:17:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.032 --rc genhtml_branch_coverage=1 00:08:22.032 --rc genhtml_function_coverage=1 00:08:22.032 --rc genhtml_legend=1 00:08:22.032 --rc geninfo_all_blocks=1 00:08:22.032 --rc geninfo_unexecuted_blocks=1 00:08:22.032 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:22.032 ' 00:08:22.032 17:17:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.032 --rc genhtml_branch_coverage=1 00:08:22.032 --rc genhtml_function_coverage=1 00:08:22.032 --rc genhtml_legend=1 00:08:22.032 --rc geninfo_all_blocks=1 00:08:22.032 --rc geninfo_unexecuted_blocks=1 00:08:22.032 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:22.032 ' 00:08:22.032 17:17:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:22.032 --rc genhtml_branch_coverage=1 00:08:22.032 --rc genhtml_function_coverage=1 00:08:22.032 --rc genhtml_legend=1 00:08:22.032 --rc geninfo_all_blocks=1 00:08:22.032 --rc geninfo_unexecuted_blocks=1 00:08:22.032 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:22.032 ' 00:08:22.032 17:17:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.032 --rc genhtml_branch_coverage=1 00:08:22.032 --rc genhtml_function_coverage=1 00:08:22.032 --rc genhtml_legend=1 00:08:22.032 --rc geninfo_all_blocks=1 00:08:22.032 --rc geninfo_unexecuted_blocks=1 00:08:22.032 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:22.032 ' 00:08:22.032 17:17:41 -- vfio/run.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:22.032 17:17:41 -- ../common.sh@8 -- # pids=() 00:08:22.032 17:17:41 -- vfio/run.sh@58 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:22.032 17:17:41 -- vfio/run.sh@59 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:22.032 17:17:41 -- vfio/run.sh@59 -- # fuzz_num=7 00:08:22.032 17:17:41 -- vfio/run.sh@60 -- # (( fuzz_num != 0 )) 00:08:22.032 17:17:41 -- vfio/run.sh@62 -- # trap 'cleanup /tmp/vfio-user-*; exit 1' SIGINT SIGTERM EXIT 00:08:22.032 17:17:41 -- vfio/run.sh@65 -- # mem_size=0 00:08:22.032 17:17:41 -- vfio/run.sh@66 -- # [[ 1 -eq 1 ]] 00:08:22.032 17:17:41 -- vfio/run.sh@67 -- # start_llvm_fuzz_short 7 1 00:08:22.032 17:17:41 -- ../common.sh@69 -- # local fuzz_num=7 00:08:22.032 17:17:41 -- ../common.sh@70 -- # local time=1 00:08:22.032 17:17:41 -- ../common.sh@72 -- # (( i = 0 )) 00:08:22.032 17:17:41 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.032 17:17:41 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:22.032 17:17:41 -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:22.032 17:17:41 -- vfio/run.sh@23 -- # local timen=1 00:08:22.032 17:17:41 -- vfio/run.sh@24 -- # local core=0x1 00:08:22.032 17:17:41 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:22.032 17:17:41 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:22.032 17:17:41 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:22.032 17:17:41 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:22.032 17:17:41 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:22.032 17:17:41 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:22.032 17:17:41 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:22.032 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:22.032 17:17:41 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 
-Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:22.032 [2024-11-09 17:17:41.714826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:22.032 [2024-11-09 17:17:41.714921] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377197 ] 00:08:22.032 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.032 [2024-11-09 17:17:41.790566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.292 [2024-11-09 17:17:41.866770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.292 [2024-11-09 17:17:41.866915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.292 INFO: Running with entropic power schedule (0xFF, 100). 00:08:22.292 INFO: Seed: 4255451086 00:08:22.551 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:22.551 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:22.551 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:22.551 INFO: A corpus is not provided, starting from an empty corpus 00:08:22.551 #2 INITED exec/s: 0 rss: 62Mb 00:08:22.551 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:22.551 This may also happen if the target rejected all inputs we tried so far 00:08:22.809 NEW_FUNC[1/629]: 0x43a218 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:85 00:08:23.068 NEW_FUNC[2/629]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:23.068 #5 NEW cov: 10752 ft: 10728 corp: 2/58b lim: 60 exec/s: 0 rss: 68Mb L: 57/57 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:08:23.068 NEW_FUNC[1/2]: 0x135b738 in spdk_nvme_opc_get_data_transfer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/nvme_spec.h:1728 00:08:23.068 NEW_FUNC[2/2]: 0x16394f8 in nvme_payload_type /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:260 00:08:23.068 #6 NEW cov: 10773 ft: 14542 corp: 3/115b lim: 60 exec/s: 0 rss: 69Mb L: 57/57 MS: 1 CMP- DE: "\377\377\377\017"- 00:08:23.327 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:23.327 #7 NEW cov: 10796 ft: 15619 corp: 4/172b lim: 60 exec/s: 0 rss: 70Mb L: 57/57 MS: 1 CrossOver- 00:08:23.586 #13 NEW cov: 10796 ft: 15905 corp: 5/218b lim: 60 exec/s: 13 rss: 70Mb L: 46/57 MS: 1 EraseBytes- 00:08:23.844 #14 NEW cov: 10796 ft: 16730 corp: 6/273b lim: 60 exec/s: 14 rss: 70Mb L: 55/57 MS: 1 EraseBytes- 00:08:23.844 #15 NEW cov: 10796 ft: 17089 corp: 7/330b lim: 60 exec/s: 15 rss: 70Mb L: 57/57 MS: 1 ChangeBit- 00:08:24.103 #16 NEW cov: 10796 ft: 17408 corp: 8/387b lim: 60 exec/s: 16 rss: 70Mb L: 57/57 MS: 1 ChangeByte- 00:08:24.361 #17 NEW cov: 10803 ft: 17654 corp: 9/444b lim: 60 exec/s: 17 rss: 70Mb L: 57/57 MS: 1 ChangeBinInt- 00:08:24.620 #18 NEW cov: 10803 ft: 17965 corp: 10/501b lim: 60 exec/s: 9 rss: 70Mb L: 57/57 MS: 1 ChangeByte- 00:08:24.620 #18 DONE cov: 10803 ft: 17965 corp: 10/501b lim: 60 exec/s: 9 rss: 70Mb 00:08:24.620 ###### Recommended dictionary. 
###### 00:08:24.620 "\377\377\377\017" # Uses: 0 00:08:24.620 ###### End of recommended dictionary. ###### 00:08:24.620 Done 18 runs in 2 second(s) 00:08:24.879 17:17:44 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-0 00:08:24.879 17:17:44 -- ../common.sh@72 -- # (( i++ )) 00:08:24.879 17:17:44 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:24.879 17:17:44 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:24.879 17:17:44 -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:24.879 17:17:44 -- vfio/run.sh@23 -- # local timen=1 00:08:24.879 17:17:44 -- vfio/run.sh@24 -- # local core=0x1 00:08:24.879 17:17:44 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:24.879 17:17:44 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:24.879 17:17:44 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:24.879 17:17:44 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:24.879 17:17:44 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:24.879 17:17:44 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:24.879 17:17:44 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:24.879 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:24.879 17:17:44 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:24.879 [2024-11-09 17:17:44.444490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:24.879 [2024-11-09 17:17:44.444569] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377713 ] 00:08:24.879 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.879 [2024-11-09 17:17:44.518670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.879 [2024-11-09 17:17:44.586437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:24.879 [2024-11-09 17:17:44.586605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.138 INFO: Running with entropic power schedule (0xFF, 100). 00:08:25.138 INFO: Seed: 2682486502 00:08:25.138 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:25.138 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:25.138 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:25.138 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.138 #2 INITED exec/s: 0 rss: 62Mb 00:08:25.138 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
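Run 0 above completes the pattern every fuzzer instance in this job follows: create a private /tmp/vfio-user-N tree, sed the shared fuzz_vfio_json.conf template so the vfio-user socket paths are unique to the run, launch llvm_vfio_fuzz with a one-second budget (-t 1) against fuzz target N (-Z N), then rm -rf the scratch tree. A condensed sketch of that loop with paths shortened; $spdk is a stand-in for the repo root, and the real run.sh additionally passes the corpus (-D) and output-prefix (-P) paths seen in the trace:

# Sketch of the start_llvm_fuzz loop driving runs 0..6 (abbreviated).
fuzz_src=$spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
fuzz_num=$(grep -c '\.fn =' "$fuzz_src")    # 7 registered fuzz targets
for (( i = 0; i < fuzz_num; i++ )); do
    d=/tmp/vfio-user-$i
    mkdir -p "$d/domain/1" "$d/domain/2"
    # Rewrite the template so this run gets its own vfio-user sockets.
    sed -e "s%/tmp/vfio-user/domain/1%$d/domain/1%" \
        -e "s%/tmp/vfio-user/domain/2%$d/domain/2%" \
        "$spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$d/fuzz_vfio_json.conf"
    "$spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
        -m 0x1 -s 0 -F "$d/domain/1" -c "$d/fuzz_vfio_json.conf" \
        -t 1 -Y "$d/domain/2" -r "$d/spdk$i.sock" -Z "$i"
    rm -rf "$d"
done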
00:08:25.138 This may also happen if the target rejected all inputs we tried so far 00:08:25.138 [2024-11-09 17:17:44.881566] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.138 [2024-11-09 17:17:44.881602] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.138 [2024-11-09 17:17:44.881621] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.656 NEW_FUNC[1/638]: 0x43a7b8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:72 00:08:25.656 NEW_FUNC[2/638]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:25.656 #7 NEW cov: 10778 ft: 10690 corp: 2/31b lim: 40 exec/s: 0 rss: 67Mb L: 30/30 MS: 5 ChangeBit-InsertByte-EraseBytes-ChangeBinInt-InsertRepeatedBytes- 00:08:25.656 [2024-11-09 17:17:45.344605] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.656 [2024-11-09 17:17:45.344641] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.656 [2024-11-09 17:17:45.344659] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.916 #11 NEW cov: 10792 ft: 13075 corp: 3/41b lim: 40 exec/s: 0 rss: 68Mb L: 10/30 MS: 4 CopyPart-ChangeByte-ChangeBit-CMP- DE: "\377\377\377\377\377\377\377\000"- 00:08:25.916 [2024-11-09 17:17:45.541963] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.916 [2024-11-09 17:17:45.541988] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.916 [2024-11-09 17:17:45.542006] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.916 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:25.916 #13 NEW cov: 10809 ft: 13613 corp: 4/67b lim: 40 exec/s: 0 rss: 69Mb L: 26/30 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:26.175 [2024-11-09 17:17:45.729236] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.175 [2024-11-09 17:17:45.729259] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.175 [2024-11-09 17:17:45.729277] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.175 #14 NEW cov: 10809 ft: 14514 corp: 5/93b lim: 40 exec/s: 14 rss: 69Mb L: 26/30 MS: 1 ChangeBit- 00:08:26.175 [2024-11-09 17:17:45.915846] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.175 [2024-11-09 17:17:45.915870] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.175 [2024-11-09 17:17:45.915887] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.433 #15 NEW cov: 10809 ft: 14863 corp: 6/122b lim: 40 exec/s: 15 rss: 69Mb L: 29/30 MS: 1 EraseBytes- 00:08:26.433 [2024-11-09 17:17:46.101035] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.433 [2024-11-09 17:17:46.101059] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.433 [2024-11-09 17:17:46.101075] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.692 #16 NEW cov: 10809 ft: 15082 corp: 7/148b 
lim: 40 exec/s: 16 rss: 69Mb L: 26/30 MS: 1 ChangeBinInt- 00:08:26.692 [2024-11-09 17:17:46.286301] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.692 [2024-11-09 17:17:46.286324] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.692 [2024-11-09 17:17:46.286346] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.692 #17 NEW cov: 10809 ft: 15363 corp: 8/181b lim: 40 exec/s: 17 rss: 70Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:08:26.951 [2024-11-09 17:17:46.473454] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.951 [2024-11-09 17:17:46.473477] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.951 [2024-11-09 17:17:46.473494] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.951 #18 NEW cov: 10809 ft: 15420 corp: 9/207b lim: 40 exec/s: 18 rss: 70Mb L: 26/33 MS: 1 ShuffleBytes- 00:08:26.951 [2024-11-09 17:17:46.656827] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.951 [2024-11-09 17:17:46.656850] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.951 [2024-11-09 17:17:46.656867] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:27.210 #19 NEW cov: 10816 ft: 15575 corp: 10/233b lim: 40 exec/s: 19 rss: 70Mb L: 26/33 MS: 1 ChangeBinInt- 00:08:27.210 [2024-11-09 17:17:46.841045] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:27.210 [2024-11-09 17:17:46.841067] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:27.210 [2024-11-09 17:17:46.841083] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:27.210 #20 NEW cov: 10816 ft: 15647 corp: 11/259b lim: 40 exec/s: 10 rss: 70Mb L: 26/33 MS: 1 ChangeBinInt- 00:08:27.210 #20 DONE cov: 10816 ft: 15647 corp: 11/259b lim: 40 exec/s: 10 rss: 70Mb 00:08:27.210 ###### Recommended dictionary. ###### 00:08:27.210 "\377\377\377\377\377\377\377\000" # Uses: 0 00:08:27.210 ###### End of recommended dictionary. 
###### 00:08:27.210 Done 20 runs in 2 second(s) 00:08:27.470 17:17:47 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-1 00:08:27.470 17:17:47 -- ../common.sh@72 -- # (( i++ )) 00:08:27.470 17:17:47 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.470 17:17:47 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:27.470 17:17:47 -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:27.470 17:17:47 -- vfio/run.sh@23 -- # local timen=1 00:08:27.470 17:17:47 -- vfio/run.sh@24 -- # local core=0x1 00:08:27.470 17:17:47 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:27.470 17:17:47 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:27.470 17:17:47 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:27.470 17:17:47 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:27.470 17:17:47 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:27.470 17:17:47 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:27.470 17:17:47 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:27.470 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:27.470 17:17:47 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:27.730 [2024-11-09 17:17:47.259453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:27.730 [2024-11-09 17:17:47.259543] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378124 ] 00:08:27.730 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.730 [2024-11-09 17:17:47.333305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.730 [2024-11-09 17:17:47.402403] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.730 [2024-11-09 17:17:47.402576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.989 INFO: Running with entropic power schedule (0xFF, 100). 00:08:27.989 INFO: Seed: 1212514242 00:08:27.989 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:27.989 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:27.989 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:27.989 INFO: A corpus is not provided, starting from an empty corpus 00:08:27.989 #2 INITED exec/s: 0 rss: 61Mb 00:08:27.989 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:27.989 This may also happen if the target rejected all inputs we tried so far 00:08:27.989 [2024-11-09 17:17:47.727920] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.508 NEW_FUNC[1/636]: 0x43b1a8 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:104 00:08:28.508 NEW_FUNC[2/636]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:28.508 #5 NEW cov: 10755 ft: 10574 corp: 2/62b lim: 80 exec/s: 0 rss: 67Mb L: 61/61 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:08:28.508 [2024-11-09 17:17:48.213571] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.767 #7 NEW cov: 10772 ft: 13578 corp: 3/131b lim: 80 exec/s: 0 rss: 69Mb L: 69/69 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:28.767 [2024-11-09 17:17:48.418336] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.068 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:29.068 #8 NEW cov: 10789 ft: 15680 corp: 4/192b lim: 80 exec/s: 0 rss: 70Mb L: 61/69 MS: 1 ShuffleBytes- 00:08:29.068 [2024-11-09 17:17:48.629546] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.068 #9 NEW cov: 10789 ft: 16128 corp: 5/225b lim: 80 exec/s: 9 rss: 70Mb L: 33/69 MS: 1 CrossOver- 00:08:29.356 [2024-11-09 17:17:48.831021] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.356 #10 NEW cov: 10789 ft: 16464 corp: 6/294b lim: 80 exec/s: 10 rss: 70Mb L: 69/69 MS: 1 ChangeBit- 00:08:29.356 [2024-11-09 17:17:49.025685] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.616 #11 NEW cov: 10789 ft: 17108 corp: 7/332b lim: 80 exec/s: 11 rss: 70Mb L: 38/69 MS: 1 EraseBytes- 00:08:29.616 [2024-11-09 17:17:49.219303] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.616 #12 NEW cov: 10789 ft: 17512 corp: 8/365b lim: 80 exec/s: 12 rss: 70Mb L: 33/69 MS: 1 CopyPart- 00:08:29.874 [2024-11-09 17:17:49.425753] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.875 #13 NEW cov: 10796 ft: 17756 corp: 9/438b lim: 80 exec/s: 13 rss: 70Mb L: 73/73 MS: 1 InsertRepeatedBytes- 00:08:29.875 [2024-11-09 17:17:49.623428] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:29.875 [2024-11-09 17:17:49.623460] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:30.133 NEW_FUNC[1/2]: 0x13176f8 in endpoint_id /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:638 00:08:30.134 NEW_FUNC[2/2]: 0x1317998 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3084 00:08:30.134 #22 NEW cov: 10809 ft: 18096 corp: 10/485b lim: 80 exec/s: 11 rss: 70Mb L: 47/73 MS: 4 ChangeBit-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:08:30.134 #22 DONE cov: 10809 ft: 18096 corp: 10/485b lim: 80 exec/s: 11 rss: 70Mb 00:08:30.134 Done 22 runs in 2 second(s) 00:08:30.392 17:17:50 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-2 00:08:30.392 17:17:50 -- ../common.sh@72 -- # (( i++ )) 00:08:30.392 17:17:50 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.392 17:17:50 -- 
../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:30.392 17:17:50 -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:30.392 17:17:50 -- vfio/run.sh@23 -- # local timen=1 00:08:30.392 17:17:50 -- vfio/run.sh@24 -- # local core=0x1 00:08:30.392 17:17:50 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:30.392 17:17:50 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:30.392 17:17:50 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:30.392 17:17:50 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:30.392 17:17:50 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:30.392 17:17:50 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:30.392 17:17:50 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:30.392 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:30.392 17:17:50 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:30.392 [2024-11-09 17:17:50.064976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:30.392 [2024-11-09 17:17:50.065046] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378585 ] 00:08:30.392 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.392 [2024-11-09 17:17:50.138211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.651 [2024-11-09 17:17:50.212170] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.651 [2024-11-09 17:17:50.212320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.651 INFO: Running with entropic power schedule (0xFF, 100). 00:08:30.651 INFO: Seed: 4018533351 00:08:30.909 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:30.909 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:30.909 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:30.909 INFO: A corpus is not provided, starting from an empty corpus 00:08:30.909 #2 INITED exec/s: 0 rss: 62Mb 00:08:30.909 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
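The traced commands above are the per-fuzzer-type setup that run.sh repeats for every fuzzer type (0 through 6): create the per-type working directories, rewrite the vfio-user socket paths in the template JSON config, then launch llvm_vfio_fuzz against the fresh endpoint. A condensed sketch of that pattern follows; SPDK_DIR is an illustrative stand-in for the long workspace path, and redirecting sed's output into the per-type config file is an assumption (the trace elides the redirection), while every flag mirrors the traced invocation.

    # Sketch only -- the real logic lives in test/fuzz/llvm/vfio/run.sh.
    fuzzer_type=$1                  # e.g. 3 selects fuzz_vfio_user_dma_map via -Z
    dir=/tmp/vfio-user-$fuzzer_type
    corpus=$SPDK_DIR/../corpus/llvm_vfio_$fuzzer_type
    mkdir -p "$dir" "$dir/domain/1" "$dir/domain/2" "$corpus"
    # Point the template config at this run's private vfio-user sockets.
    sed -e "s%/tmp/vfio-user/domain/1%$dir/domain/1%;
            s%/tmp/vfio-user/domain/2%$dir/domain/2%" \
        "$SPDK_DIR/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$dir/fuzz_vfio_json.conf"
    # -m 0x1: one core; -s 0: no reserved memory; -P: crash-artifact dir;
    # -F: endpoint under test; -t 1: time budget (timen=1 above); -D: persistent
    # corpus dir; -Y: io dir; -r: app socket; -Z: which vfio-user handler to fuzz.
    "$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
        -m 0x1 -s 0 -P "$SPDK_DIR/../output/llvm/" -F "$dir/domain/1" \
        -c "$dir/fuzz_vfio_json.conf" -t 1 -D "$corpus" \
        -Y "$dir/domain/2" -r "$dir/spdk$fuzzer_type.sock" -Z "$fuzzer_type"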
00:08:30.909 This may also happen if the target rejected all inputs we tried so far 00:08:31.167 NEW_FUNC[1/631]: 0x43b898 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:125 00:08:31.167 NEW_FUNC[2/631]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:31.167 #8 NEW cov: 10748 ft: 10713 corp: 2/97b lim: 320 exec/s: 0 rss: 67Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:08:31.426 NEW_FUNC[1/1]: 0x1c256c8 in spdk_get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1255 00:08:31.426 #11 NEW cov: 10766 ft: 13707 corp: 3/195b lim: 320 exec/s: 0 rss: 68Mb L: 98/98 MS: 3 CopyPart-CopyPart-CrossOver- 00:08:31.685 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:31.685 #12 NEW cov: 10783 ft: 15254 corp: 4/299b lim: 320 exec/s: 0 rss: 69Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:08:31.944 #13 NEW cov: 10783 ft: 15511 corp: 5/396b lim: 320 exec/s: 13 rss: 69Mb L: 97/104 MS: 1 CrossOver- 00:08:32.203 #14 NEW cov: 10783 ft: 16079 corp: 6/494b lim: 320 exec/s: 14 rss: 69Mb L: 98/104 MS: 1 ChangeByte- 00:08:32.203 #15 NEW cov: 10783 ft: 16140 corp: 7/591b lim: 320 exec/s: 15 rss: 69Mb L: 97/104 MS: 1 EraseBytes- 00:08:32.462 #18 NEW cov: 10783 ft: 16280 corp: 8/664b lim: 320 exec/s: 18 rss: 69Mb L: 73/104 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:32.721 #19 NEW cov: 10790 ft: 16559 corp: 9/768b lim: 320 exec/s: 19 rss: 69Mb L: 104/104 MS: 1 ShuffleBytes- 00:08:32.980 #20 NEW cov: 10790 ft: 16649 corp: 10/872b lim: 320 exec/s: 10 rss: 69Mb L: 104/104 MS: 1 CMP- DE: "\007\223\217\005\000\000\000\000"- 00:08:32.980 #20 DONE cov: 10790 ft: 16649 corp: 10/872b lim: 320 exec/s: 10 rss: 69Mb 00:08:32.980 ###### Recommended dictionary. ###### 00:08:32.980 "\007\223\217\005\000\000\000\000" # Uses: 0 00:08:32.980 ###### End of recommended dictionary. 
###### 00:08:32.980 Done 20 runs in 2 second(s) 00:08:33.240 17:17:52 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-3 00:08:33.240 17:17:52 -- ../common.sh@72 -- # (( i++ )) 00:08:33.240 17:17:52 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.240 17:17:52 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:33.240 17:17:52 -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:33.240 17:17:52 -- vfio/run.sh@23 -- # local timen=1 00:08:33.240 17:17:52 -- vfio/run.sh@24 -- # local core=0x1 00:08:33.240 17:17:52 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:33.240 17:17:52 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:33.240 17:17:52 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:33.240 17:17:52 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:33.240 17:17:52 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:33.240 17:17:52 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:33.240 17:17:52 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:33.240 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:33.240 17:17:52 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:33.240 [2024-11-09 17:17:52.817417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:33.240 [2024-11-09 17:17:52.817485] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379130 ] 00:08:33.240 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.240 [2024-11-09 17:17:52.889125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.240 [2024-11-09 17:17:52.956040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.240 [2024-11-09 17:17:52.956185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.499 INFO: Running with entropic power schedule (0xFF, 100). 00:08:33.500 INFO: Seed: 2457561252 00:08:33.500 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:33.500 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:33.500 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:33.500 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.500 #2 INITED exec/s: 0 rss: 62Mb 00:08:33.500 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
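The status lines in the runs above are standard libFuzzer output. Taking one line from the dma_map run as a worked example (field meanings per libFuzzer's documented status format):

    #13 NEW cov: 10809 ft: 15511 corp: 5/396b lim: 320 exec/s: 13 rss: 69Mb L: 97/104 MS: 1 CrossOver-

    #13      the event happened on the 13th execution
    NEW      this input increased coverage and was added to the corpus
    cov:     total coverage edges/blocks hit so far
    ft:      total coverage "features" seen (finer-grained than cov)
    corp:    the corpus now holds 5 inputs totalling 396 bytes
    lim:     current cap on generated input length (grows toward -max_len)
    exec/s:  executions per second (0 until a full second has elapsed)
    rss:     resident memory of the fuzzer process
    L:       this input is 97 bytes; the longest corpus entry is 104
    MS:      the mutation sequence that produced it -- here one CrossOver
    pulse    (seen later as "#32 pulse") a periodic heartbeat, no corpus change
    DONE     the closing summary once the run's time budget expires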
00:08:33.500 This may also happen if the target rejected all inputs we tried so far 00:08:34.021 NEW_FUNC[1/627]: 0x43c118 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:145 00:08:34.021 NEW_FUNC[2/627]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:34.021 #27 NEW cov: 10608 ft: 10707 corp: 2/144b lim: 320 exec/s: 0 rss: 67Mb L: 143/143 MS: 5 CrossOver-ChangeBinInt-InsertRepeatedBytes-InsertRepeatedBytes-InsertRepeatedBytes- 00:08:34.021 [2024-11-09 17:17:53.714005] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:34.021 [2024-11-09 17:17:53.714048] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:34.021 [2024-11-09 17:17:53.714060] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:34.021 [2024-11-09 17:17:53.714081] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:34.021 [2024-11-09 17:17:53.715001] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:08:34.021 [2024-11-09 17:17:53.715021] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:34.021 [2024-11-09 17:17:53.715037] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:34.280 NEW_FUNC[1/11]: 0x10b7bb8 in nvmf_qpair_request_cleanup /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4329 00:08:34.280 NEW_FUNC[2/11]: 0x13176f8 in endpoint_id /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:638 00:08:34.280 #34 NEW cov: 10802 ft: 13578 corp: 3/242b lim: 320 exec/s: 0 rss: 69Mb L: 98/143 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:34.280 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:34.280 #35 NEW cov: 10819 ft: 14671 corp: 4/518b lim: 320 exec/s: 0 rss: 70Mb L: 276/276 MS: 1 CrossOver- 00:08:34.540 [2024-11-09 17:17:54.097253] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:34.540 [2024-11-09 17:17:54.097279] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:34.540 [2024-11-09 17:17:54.097291] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:34.540 [2024-11-09 17:17:54.097307] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:34.540 [2024-11-09 17:17:54.098276] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:08:34.540 [2024-11-09 17:17:54.098297] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:34.540 [2024-11-09 17:17:54.098314] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:34.540 #36 NEW cov: 10819 ft: 15132 corp: 5/616b lim: 320 exec/s: 36 rss: 70Mb L: 98/276 MS: 1 ChangeByte- 00:08:34.540 [2024-11-09 
17:17:54.287329] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:34.540 [2024-11-09 17:17:54.287352] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:34.540 [2024-11-09 17:17:54.287363] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:34.540 [2024-11-09 17:17:54.287379] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:34.540 [2024-11-09 17:17:54.288372] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:08:34.540 [2024-11-09 17:17:54.288391] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:34.540 [2024-11-09 17:17:54.288407] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:34.800 #37 NEW cov: 10819 ft: 16065 corp: 6/714b lim: 320 exec/s: 37 rss: 71Mb L: 98/276 MS: 1 CMP- DE: "\001\000\000\005"- 00:08:34.800 [2024-11-09 17:17:54.473634] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:34.800 [2024-11-09 17:17:54.473656] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:34.800 [2024-11-09 17:17:54.473666] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:34.800 [2024-11-09 17:17:54.473697] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:34.800 [2024-11-09 17:17:54.474651] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:08:34.800 [2024-11-09 17:17:54.474674] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:34.800 [2024-11-09 17:17:54.474690] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:35.059 #38 NEW cov: 10819 ft: 16210 corp: 7/812b lim: 320 exec/s: 38 rss: 71Mb L: 98/276 MS: 1 ChangeBit- 00:08:35.059 [2024-11-09 17:17:54.659895] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:35.059 [2024-11-09 17:17:54.659919] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:35.059 [2024-11-09 17:17:54.659929] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:35.059 [2024-11-09 17:17:54.659945] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:35.059 [2024-11-09 17:17:54.660898] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:08:35.059 [2024-11-09 17:17:54.660918] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:35.059 [2024-11-09 17:17:54.660933] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:35.059 #39 NEW cov: 10819 ft: 16536 corp: 8/910b lim: 320 exec/s: 39 rss: 71Mb L: 
98/276 MS: 1 ShuffleBytes- 00:08:35.319 [2024-11-09 17:17:54.847634] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:35.319 [2024-11-09 17:17:54.847668] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:35.319 [2024-11-09 17:17:54.847678] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:35.319 [2024-11-09 17:17:54.847710] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:35.319 [2024-11-09 17:17:54.848632] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:08:35.319 [2024-11-09 17:17:54.848653] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:35.319 [2024-11-09 17:17:54.848669] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:35.319 #40 NEW cov: 10819 ft: 16830 corp: 9/1076b lim: 320 exec/s: 40 rss: 71Mb L: 166/276 MS: 1 InsertRepeatedBytes- 00:08:35.319 [2024-11-09 17:17:55.036478] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:35.319 [2024-11-09 17:17:55.036501] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:35.319 [2024-11-09 17:17:55.036511] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:35.319 [2024-11-09 17:17:55.036545] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:35.319 [2024-11-09 17:17:55.037486] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:08:35.319 [2024-11-09 17:17:55.037505] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:35.319 [2024-11-09 17:17:55.037520] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:35.579 #41 NEW cov: 10826 ft: 17147 corp: 10/1174b lim: 320 exec/s: 41 rss: 71Mb L: 98/276 MS: 1 CopyPart- 00:08:35.579 [2024-11-09 17:17:55.223028] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x1c1c1c0000000000, 0x1c1c1c1c1c1c1c1c) fd=325 offset=0 prot=0x3: Cannot allocate memory 00:08:35.579 [2024-11-09 17:17:55.223052] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x1c1c1c0000000000, 0x1c1c1c1c1c1c1c1c) offset=0 flags=0x3: Cannot allocate memory 00:08:35.579 [2024-11-09 17:17:55.223066] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Cannot allocate memory 00:08:35.579 [2024-11-09 17:17:55.223083] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:35.579 [2024-11-09 17:17:55.224045] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x1c1c1c0000000000, 0x1c1c1c1c1c1c1c1c) flags=0: No such file or directory 00:08:35.579 [2024-11-09 17:17:55.224065] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:35.579 [2024-11-09 17:17:55.224080] 
vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:35.579 #42 NEW cov: 10826 ft: 17445 corp: 11/1311b lim: 320 exec/s: 21 rss: 71Mb L: 137/276 MS: 1 CrossOver- 00:08:35.579 #42 DONE cov: 10826 ft: 17445 corp: 11/1311b lim: 320 exec/s: 21 rss: 71Mb 00:08:35.579 ###### Recommended dictionary. ###### 00:08:35.579 "\001\000\000\005" # Uses: 0 00:08:35.579 ###### End of recommended dictionary. ###### 00:08:35.579 Done 42 runs in 2 second(s) 00:08:35.839 17:17:55 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-4 00:08:35.839 17:17:55 -- ../common.sh@72 -- # (( i++ )) 00:08:35.839 17:17:55 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.839 17:17:55 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:35.839 17:17:55 -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:35.839 17:17:55 -- vfio/run.sh@23 -- # local timen=1 00:08:35.839 17:17:55 -- vfio/run.sh@24 -- # local core=0x1 00:08:35.839 17:17:55 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:35.839 17:17:55 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:35.839 17:17:55 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:35.839 17:17:55 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:35.839 17:17:55 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:35.839 17:17:55 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:35.839 17:17:55 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:35.839 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:36.099 17:17:55 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:36.099 [2024-11-09 17:17:55.642732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:36.099 [2024-11-09 17:17:55.642824] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379678 ] 00:08:36.099 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.099 [2024-11-09 17:17:55.713783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.099 [2024-11-09 17:17:55.780779] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.099 [2024-11-09 17:17:55.780924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.359 INFO: Running with entropic power schedule (0xFF, 100). 
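A note on the *ERROR* lines that dominate these runs: the "cmd N" / "command N" numbers are vfio-user protocol message IDs, and each fuzzer type attacks exactly one handler, which is why a given run keeps repeating a single command number. Matching the NEW_FUNC harness names to the errors (the VFIO_USER_* names come from the vfio-user protocol specification, not from this log):

    fuzzer 1  fuzz_vfio_user_version           cmd 1     VFIO_USER_VERSION
    fuzzer 2  fuzz_vfio_user_get_region_info   cmd 5     VFIO_USER_DEVICE_GET_REGION_INFO
    fuzzer 3  fuzz_vfio_user_dma_map           cmd 2     VFIO_USER_DMA_MAP
    fuzzer 4  fuzz_vfio_user_dma_unmap         cmds 2/3  (a map is attempted, then VFIO_USER_DMA_UNMAP)
    fuzzer 5  fuzz_vfio_user_irq_set (below)   cmd 8     VFIO_USER_DEVICE_SET_IRQS
    fuzzer 6  fuzz_vfio_user_set_msix (below)  cmd 8     (MSI-X is also configured via SET_IRQS)

Rejected commands are the expected outcome here: the harness sends deliberately malformed messages, and a clean "Invalid argument" from the target, rather than a crash, counts as a pass.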
00:08:36.359 INFO: Seed: 984588864 00:08:36.359 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:36.359 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:36.359 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:36.359 INFO: A corpus is not provided, starting from an empty corpus 00:08:36.359 #2 INITED exec/s: 0 rss: 62Mb 00:08:36.359 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:36.359 This may also happen if the target rejected all inputs we tried so far 00:08:36.359 [2024-11-09 17:17:56.052552] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.359 [2024-11-09 17:17:56.052605] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.877 NEW_FUNC[1/638]: 0x43cb18 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:172 00:08:36.877 NEW_FUNC[2/638]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:36.877 #11 NEW cov: 10777 ft: 10633 corp: 2/119b lim: 120 exec/s: 0 rss: 68Mb L: 118/118 MS: 4 ChangeByte-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:08:36.877 [2024-11-09 17:17:56.524537] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.877 [2024-11-09 17:17:56.524583] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.877 #17 NEW cov: 10791 ft: 13285 corp: 3/238b lim: 120 exec/s: 0 rss: 69Mb L: 119/119 MS: 1 InsertByte- 00:08:37.136 [2024-11-09 17:17:56.702034] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.136 [2024-11-09 17:17:56.702066] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.136 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:37.137 #21 NEW cov: 10811 ft: 14482 corp: 4/300b lim: 120 exec/s: 0 rss: 70Mb L: 62/119 MS: 4 ChangeBinInt-CopyPart-InsertByte-InsertRepeatedBytes- 00:08:37.137 [2024-11-09 17:17:56.887281] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.137 [2024-11-09 17:17:56.887312] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.397 #22 NEW cov: 10811 ft: 14689 corp: 5/347b lim: 120 exec/s: 22 rss: 70Mb L: 47/119 MS: 1 EraseBytes- 00:08:37.397 [2024-11-09 17:17:57.061649] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.397 [2024-11-09 17:17:57.061680] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.656 #23 NEW cov: 10811 ft: 15196 corp: 6/396b lim: 120 exec/s: 23 rss: 70Mb L: 49/119 MS: 1 CMP- DE: "|\000"- 00:08:37.656 [2024-11-09 17:17:57.236923] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.656 [2024-11-09 17:17:57.236955] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.656 #24 NEW cov: 10811 ft: 15398 corp: 7/516b lim: 120 exec/s: 24 rss: 70Mb L: 120/120 MS: 1 InsertByte- 00:08:37.656 [2024-11-09 17:17:57.411501] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.657 
[2024-11-09 17:17:57.411540] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.916 #25 NEW cov: 10811 ft: 15430 corp: 8/636b lim: 120 exec/s: 25 rss: 70Mb L: 120/120 MS: 1 PersAutoDict- DE: "|\000"- 00:08:37.916 [2024-11-09 17:17:57.587628] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.916 [2024-11-09 17:17:57.587675] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.175 #26 NEW cov: 10811 ft: 15499 corp: 9/683b lim: 120 exec/s: 26 rss: 70Mb L: 47/120 MS: 1 ShuffleBytes- 00:08:38.175 [2024-11-09 17:17:57.762682] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.175 [2024-11-09 17:17:57.762713] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.175 #27 NEW cov: 10818 ft: 16611 corp: 10/733b lim: 120 exec/s: 27 rss: 70Mb L: 50/120 MS: 1 CopyPart- 00:08:38.175 [2024-11-09 17:17:57.937775] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.175 [2024-11-09 17:17:57.937805] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.433 #28 NEW cov: 10818 ft: 17334 corp: 11/795b lim: 120 exec/s: 14 rss: 70Mb L: 62/120 MS: 1 ChangeByte- 00:08:38.433 #28 DONE cov: 10818 ft: 17334 corp: 11/795b lim: 120 exec/s: 14 rss: 70Mb 00:08:38.433 ###### Recommended dictionary. ###### 00:08:38.433 "|\000" # Uses: 1 00:08:38.433 ###### End of recommended dictionary. ###### 00:08:38.434 Done 28 runs in 2 second(s) 00:08:38.692 17:17:58 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-5 00:08:38.693 17:17:58 -- ../common.sh@72 -- # (( i++ )) 00:08:38.693 17:17:58 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:38.693 17:17:58 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:38.693 17:17:58 -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:38.693 17:17:58 -- vfio/run.sh@23 -- # local timen=1 00:08:38.693 17:17:58 -- vfio/run.sh@24 -- # local core=0x1 00:08:38.693 17:17:58 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:38.693 17:17:58 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:38.693 17:17:58 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:38.693 17:17:58 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:38.693 17:17:58 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:38.693 17:17:58 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:38.693 17:17:58 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:38.693 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:38.693 17:17:58 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:38.693 [2024-11-09 17:17:58.351944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:08:38.693 [2024-11-09 17:17:58.352018] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380180 ] 00:08:38.693 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.693 [2024-11-09 17:17:58.424211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.952 [2024-11-09 17:17:58.493838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.952 [2024-11-09 17:17:58.493999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.952 INFO: Running with entropic power schedule (0xFF, 100). 00:08:38.952 INFO: Seed: 3702588819 00:08:38.952 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:38.952 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:38.952 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:38.952 INFO: A corpus is not provided, starting from an empty corpus 00:08:38.952 #2 INITED exec/s: 0 rss: 62Mb 00:08:38.952 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:38.952 This may also happen if the target rejected all inputs we tried so far 00:08:39.211 [2024-11-09 17:17:58.797557] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.211 [2024-11-09 17:17:58.797600] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.471 NEW_FUNC[1/638]: 0x43d808 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:39.471 NEW_FUNC[2/638]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:39.471 #14 NEW cov: 10772 ft: 10744 corp: 2/10b lim: 90 exec/s: 0 rss: 68Mb L: 9/9 MS: 2 ShuffleBytes-CMP- DE: "\000\000\000\000\000\000\0038"- 00:08:39.730 [2024-11-09 17:17:59.254349] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.730 [2024-11-09 17:17:59.254394] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.730 #17 NEW cov: 10786 ft: 13887 corp: 3/68b lim: 90 exec/s: 0 rss: 69Mb L: 58/58 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:08:39.730 [2024-11-09 17:17:59.448960] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.730 [2024-11-09 17:17:59.448994] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.989 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:39.990 #18 NEW cov: 10803 ft: 14987 corp: 4/149b lim: 90 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:08:39.990 [2024-11-09 17:17:59.635550] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.990 [2024-11-09 17:17:59.635592] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.990 #19 NEW cov: 10803 ft: 15311 corp: 5/230b lim: 90 exec/s: 19 rss: 70Mb L: 81/81 MS: 1 ChangeBinInt- 00:08:40.249 [2024-11-09 17:17:59.821832] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid 
argument 00:08:40.249 [2024-11-09 17:17:59.821863] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.249 #27 NEW cov: 10803 ft: 15434 corp: 6/242b lim: 90 exec/s: 27 rss: 71Mb L: 12/81 MS: 3 InsertByte-CopyPart-PersAutoDict- DE: "\000\000\000\000\000\000\0038"- 00:08:40.249 [2024-11-09 17:18:00.005302] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.249 [2024-11-09 17:18:00.005333] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.507 #28 NEW cov: 10803 ft: 16177 corp: 7/251b lim: 90 exec/s: 28 rss: 71Mb L: 9/81 MS: 1 CopyPart- 00:08:40.507 [2024-11-09 17:18:00.190312] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.507 [2024-11-09 17:18:00.190347] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.769 #29 NEW cov: 10803 ft: 16541 corp: 8/263b lim: 90 exec/s: 29 rss: 71Mb L: 12/81 MS: 1 ChangeBit- 00:08:40.769 [2024-11-09 17:18:00.376222] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.769 [2024-11-09 17:18:00.376254] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.769 #30 NEW cov: 10803 ft: 16722 corp: 9/272b lim: 90 exec/s: 30 rss: 71Mb L: 9/81 MS: 1 ChangeByte- 00:08:41.028 [2024-11-09 17:18:00.562227] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:41.028 [2024-11-09 17:18:00.562258] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:41.028 #31 NEW cov: 10810 ft: 16742 corp: 10/289b lim: 90 exec/s: 31 rss: 71Mb L: 17/81 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\0038"- 00:08:41.028 [2024-11-09 17:18:00.745750] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:41.028 [2024-11-09 17:18:00.745782] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:41.288 #32 pulse cov: 10810 ft: 16962 corp: 10/289b lim: 90 exec/s: 16 rss: 71Mb 00:08:41.288 #32 NEW cov: 10810 ft: 16962 corp: 11/302b lim: 90 exec/s: 16 rss: 71Mb L: 13/81 MS: 1 InsertByte- 00:08:41.288 #32 DONE cov: 10810 ft: 16962 corp: 11/302b lim: 90 exec/s: 16 rss: 71Mb 00:08:41.288 ###### Recommended dictionary. ###### 00:08:41.288 "\000\000\000\000\000\000\0038" # Uses: 2 00:08:41.288 ###### End of recommended dictionary. 
###### 00:08:41.288 Done 32 runs in 2 second(s) 00:08:41.548 17:18:01 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-6 00:08:41.548 17:18:01 -- ../common.sh@72 -- # (( i++ )) 00:08:41.548 17:18:01 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:41.548 17:18:01 -- vfio/run.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:08:41.548 00:08:41.548 real 0m19.899s 00:08:41.548 user 0m28.055s 00:08:41.548 sys 0m1.873s 00:08:41.548 17:18:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.548 17:18:01 -- common/autotest_common.sh@10 -- # set +x 00:08:41.548 ************************************ 00:08:41.548 END TEST vfio_fuzz 00:08:41.548 ************************************ 00:08:41.548 00:08:41.548 real 1m24.284s 00:08:41.548 user 2m8.799s 00:08:41.548 sys 0m9.217s 00:08:41.548 17:18:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.548 17:18:01 -- common/autotest_common.sh@10 -- # set +x 00:08:41.548 ************************************ 00:08:41.548 END TEST llvm_fuzz 00:08:41.548 ************************************ 00:08:41.548 17:18:01 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:08:41.548 17:18:01 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:08:41.548 17:18:01 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:08:41.548 17:18:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.548 17:18:01 -- common/autotest_common.sh@10 -- # set +x 00:08:41.548 17:18:01 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:08:41.548 17:18:01 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:08:41.548 17:18:01 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:08:41.548 17:18:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.124 INFO: APP EXITING 00:08:48.125 INFO: killing all VMs 00:08:48.125 INFO: killing vhost app 00:08:48.125 INFO: EXIT DONE 00:08:50.663 Waiting for block devices as requested 00:08:50.663 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:50.663 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:50.663 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:50.922 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:50.922 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:50.922 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:51.182 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:51.182 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:51.182 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:51.442 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:51.442 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:51.442 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:51.442 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:51.701 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:51.701 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:51.701 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:51.961 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:56.158 Cleaning 00:08:56.158 Removing: /dev/shm/spdk_tgt_trace.pid341989 00:08:56.158 Removing: /var/run/dpdk/spdk_pid339485 00:08:56.158 Removing: /var/run/dpdk/spdk_pid340774 00:08:56.158 Removing: /var/run/dpdk/spdk_pid341989 00:08:56.158 Removing: /var/run/dpdk/spdk_pid342792 00:08:56.158 Removing: /var/run/dpdk/spdk_pid343112 00:08:56.158 Removing: /var/run/dpdk/spdk_pid343449 00:08:56.158 Removing: /var/run/dpdk/spdk_pid343793 00:08:56.158 Removing: /var/run/dpdk/spdk_pid344131 00:08:56.158 Removing: /var/run/dpdk/spdk_pid344416 00:08:56.158 Removing: /var/run/dpdk/spdk_pid344698 00:08:56.158 Removing: /var/run/dpdk/spdk_pid345021 00:08:56.158 
Removing: /var/run/dpdk/spdk_pid345895 00:08:56.158 Removing: /var/run/dpdk/spdk_pid349649 00:08:56.158 Removing: /var/run/dpdk/spdk_pid349965 00:08:56.158 Removing: /var/run/dpdk/spdk_pid350274 00:08:56.158 Removing: /var/run/dpdk/spdk_pid350522 00:08:56.158 Removing: /var/run/dpdk/spdk_pid351096 00:08:56.158 Removing: /var/run/dpdk/spdk_pid351280 00:08:56.158 Removing: /var/run/dpdk/spdk_pid351682 00:08:56.158 Removing: /var/run/dpdk/spdk_pid351944 00:08:56.158 Removing: /var/run/dpdk/spdk_pid352250 00:08:56.158 Removing: /var/run/dpdk/spdk_pid352377 00:08:56.158 Removing: /var/run/dpdk/spdk_pid352560 00:08:56.158 Removing: /var/run/dpdk/spdk_pid352827 00:08:56.158 Removing: /var/run/dpdk/spdk_pid353208 00:08:56.158 Removing: /var/run/dpdk/spdk_pid353496 00:08:56.158 Removing: /var/run/dpdk/spdk_pid353779 00:08:56.158 Removing: /var/run/dpdk/spdk_pid354013 00:08:56.158 Removing: /var/run/dpdk/spdk_pid354227 00:08:56.158 Removing: /var/run/dpdk/spdk_pid354430 00:08:56.158 Removing: /var/run/dpdk/spdk_pid354496 00:08:56.158 Removing: /var/run/dpdk/spdk_pid354769 00:08:56.158 Removing: /var/run/dpdk/spdk_pid355051 00:08:56.158 Removing: /var/run/dpdk/spdk_pid355269 00:08:56.158 Removing: /var/run/dpdk/spdk_pid355480 00:08:56.158 Removing: /var/run/dpdk/spdk_pid355644 00:08:56.158 Removing: /var/run/dpdk/spdk_pid355915 00:08:56.158 Removing: /var/run/dpdk/spdk_pid356190 00:08:56.158 Removing: /var/run/dpdk/spdk_pid356474 00:08:56.158 Removing: /var/run/dpdk/spdk_pid356740 00:08:56.158 Removing: /var/run/dpdk/spdk_pid357029 00:08:56.158 Removing: /var/run/dpdk/spdk_pid357268 00:08:56.158 Removing: /var/run/dpdk/spdk_pid357477 00:08:56.158 Removing: /var/run/dpdk/spdk_pid357644 00:08:56.158 Removing: /var/run/dpdk/spdk_pid357895 00:08:56.158 Removing: /var/run/dpdk/spdk_pid358162 00:08:56.158 Removing: /var/run/dpdk/spdk_pid358451 00:08:56.158 Removing: /var/run/dpdk/spdk_pid358722 00:08:56.158 Removing: /var/run/dpdk/spdk_pid359003 00:08:56.158 Removing: /var/run/dpdk/spdk_pid359266 00:08:56.158 Removing: /var/run/dpdk/spdk_pid359469 00:08:56.158 Removing: /var/run/dpdk/spdk_pid359630 00:08:56.158 Removing: /var/run/dpdk/spdk_pid359865 00:08:56.158 Removing: /var/run/dpdk/spdk_pid360139 00:08:56.158 Removing: /var/run/dpdk/spdk_pid360420 00:08:56.158 Removing: /var/run/dpdk/spdk_pid360688 00:08:56.158 Removing: /var/run/dpdk/spdk_pid360979 00:08:56.158 Removing: /var/run/dpdk/spdk_pid361247 00:08:56.158 Removing: /var/run/dpdk/spdk_pid361455 00:08:56.158 Removing: /var/run/dpdk/spdk_pid361628 00:08:56.158 Removing: /var/run/dpdk/spdk_pid361850 00:08:56.158 Removing: /var/run/dpdk/spdk_pid362111 00:08:56.158 Removing: /var/run/dpdk/spdk_pid362401 00:08:56.158 Removing: /var/run/dpdk/spdk_pid362675 00:08:56.158 Removing: /var/run/dpdk/spdk_pid362961 00:08:56.158 Removing: /var/run/dpdk/spdk_pid363236 00:08:56.158 Removing: /var/run/dpdk/spdk_pid363487 00:08:56.158 Removing: /var/run/dpdk/spdk_pid363651 00:08:56.158 Removing: /var/run/dpdk/spdk_pid363861 00:08:56.158 Removing: /var/run/dpdk/spdk_pid364101 00:08:56.158 Removing: /var/run/dpdk/spdk_pid364383 00:08:56.158 Removing: /var/run/dpdk/spdk_pid364654 00:08:56.158 Removing: /var/run/dpdk/spdk_pid365062 00:08:56.158 Removing: /var/run/dpdk/spdk_pid365617 00:08:56.158 Removing: /var/run/dpdk/spdk_pid366112 00:08:56.158 Removing: /var/run/dpdk/spdk_pid366649 00:08:56.158 Removing: /var/run/dpdk/spdk_pid366953 00:08:56.158 Removing: /var/run/dpdk/spdk_pid367485 00:08:56.158 Removing: /var/run/dpdk/spdk_pid368002 00:08:56.158 Removing: 
/var/run/dpdk/spdk_pid368321 00:08:56.158 Removing: /var/run/dpdk/spdk_pid368858 00:08:56.158 Removing: /var/run/dpdk/spdk_pid369338 00:08:56.158 Removing: /var/run/dpdk/spdk_pid369696 00:08:56.158 Removing: /var/run/dpdk/spdk_pid370239 00:08:56.158 Removing: /var/run/dpdk/spdk_pid370671 00:08:56.158 Removing: /var/run/dpdk/spdk_pid371072 00:08:56.158 Removing: /var/run/dpdk/spdk_pid371614 00:08:56.158 Removing: /var/run/dpdk/spdk_pid372024 00:08:56.158 Removing: /var/run/dpdk/spdk_pid372445 00:08:56.158 Removing: /var/run/dpdk/spdk_pid372988 00:08:56.158 Removing: /var/run/dpdk/spdk_pid373303 00:08:56.158 Removing: /var/run/dpdk/spdk_pid373824 00:08:56.158 Removing: /var/run/dpdk/spdk_pid374360 00:08:56.158 Removing: /var/run/dpdk/spdk_pid374664 00:08:56.158 Removing: /var/run/dpdk/spdk_pid375198 00:08:56.158 Removing: /var/run/dpdk/spdk_pid375690 00:08:56.158 Removing: /var/run/dpdk/spdk_pid376034 00:08:56.158 Removing: /var/run/dpdk/spdk_pid376568 00:08:56.158 Removing: /var/run/dpdk/spdk_pid377197 00:08:56.158 Removing: /var/run/dpdk/spdk_pid377713 00:08:56.158 Removing: /var/run/dpdk/spdk_pid378124 00:08:56.158 Removing: /var/run/dpdk/spdk_pid378585 00:08:56.158 Removing: /var/run/dpdk/spdk_pid379130 00:08:56.158 Removing: /var/run/dpdk/spdk_pid379678 00:08:56.159 Removing: /var/run/dpdk/spdk_pid380180 00:08:56.159 Clean 00:08:56.159 killing process with pid 292184 00:09:00.353 killing process with pid 292181 00:09:00.353 killing process with pid 292183 00:09:00.353 killing process with pid 292182 00:09:00.353 17:18:19 -- common/autotest_common.sh@1446 -- # return 0 00:09:00.353 17:18:19 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:09:00.353 17:18:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.353 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:09:00.353 17:18:19 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:09:00.353 17:18:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.353 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:09:00.353 17:18:19 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:00.353 17:18:19 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:09:00.353 17:18:19 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:09:00.353 17:18:19 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:09:00.353 17:18:19 -- spdk/autotest.sh@383 -- # hostname 00:09:00.353 17:18:19 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:09:00.353 geninfo: WARNING: invalid characters removed from testname! 
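The lcov capture above passes --gcov-tool test/fuzz/llvm/llvm-gcov.sh because the fuzz targets are built with clang, and lcov's default gcov cannot read clang-emitted coverage data directly. Such wrappers are conventionally a one-line shim around llvm-cov; a sketch of the usual contents, not copied from this tree:

    #!/usr/bin/env bash
    # Let lcov, which expects a gcov-compatible binary, read clang coverage data.
    exec llvm-cov gcov "$@"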
00:09:03.680 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcda 00:09:03.680 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcda 00:09:03.680 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcda 00:09:08.958 17:18:27 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:15.529 17:18:34 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:19.721 17:18:39 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:25.005 17:18:44 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:29.198 17:18:48 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:34.471 17:18:53 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 
-q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:38.664 17:18:58 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:09:38.664 17:18:58 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:09:38.664 17:18:58 -- common/autotest_common.sh@1690 -- $ lcov --version 00:09:38.664 17:18:58 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:09:38.664 17:18:58 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:09:38.664 17:18:58 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:09:38.664 17:18:58 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:09:38.664 17:18:58 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:09:38.664 17:18:58 -- scripts/common.sh@335 -- $ IFS=.-: 00:09:38.664 17:18:58 -- scripts/common.sh@335 -- $ read -ra ver1 00:09:38.664 17:18:58 -- scripts/common.sh@336 -- $ IFS=.-: 00:09:38.664 17:18:58 -- scripts/common.sh@336 -- $ read -ra ver2 00:09:38.664 17:18:58 -- scripts/common.sh@337 -- $ local 'op=<' 00:09:38.664 17:18:58 -- scripts/common.sh@339 -- $ ver1_l=2 00:09:38.664 17:18:58 -- scripts/common.sh@340 -- $ ver2_l=1 00:09:38.664 17:18:58 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:09:38.664 17:18:58 -- scripts/common.sh@343 -- $ case "$op" in 00:09:38.664 17:18:58 -- scripts/common.sh@344 -- $ : 1 00:09:38.664 17:18:58 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:09:38.664 17:18:58 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.664 17:18:58 -- scripts/common.sh@364 -- $ decimal 1 00:09:38.664 17:18:58 -- scripts/common.sh@352 -- $ local d=1 00:09:38.664 17:18:58 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:09:38.664 17:18:58 -- scripts/common.sh@354 -- $ echo 1 00:09:38.664 17:18:58 -- scripts/common.sh@364 -- $ ver1[v]=1 00:09:38.664 17:18:58 -- scripts/common.sh@365 -- $ decimal 2 00:09:38.664 17:18:58 -- scripts/common.sh@352 -- $ local d=2 00:09:38.664 17:18:58 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:09:38.664 17:18:58 -- scripts/common.sh@354 -- $ echo 2 00:09:38.664 17:18:58 -- scripts/common.sh@365 -- $ ver2[v]=2 00:09:38.664 17:18:58 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:09:38.664 17:18:58 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:09:38.664 17:18:58 -- scripts/common.sh@367 -- $ return 0 00:09:38.664 17:18:58 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.664 17:18:58 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:09:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.664 --rc genhtml_branch_coverage=1 00:09:38.664 --rc genhtml_function_coverage=1 00:09:38.664 --rc genhtml_legend=1 00:09:38.664 --rc geninfo_all_blocks=1 00:09:38.664 --rc geninfo_unexecuted_blocks=1 00:09:38.664 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:38.664 ' 00:09:38.664 17:18:58 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:09:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.664 --rc genhtml_branch_coverage=1 00:09:38.664 --rc genhtml_function_coverage=1 00:09:38.664 --rc genhtml_legend=1 00:09:38.664 --rc geninfo_all_blocks=1 00:09:38.664 --rc geninfo_unexecuted_blocks=1 00:09:38.664 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:38.664 ' 00:09:38.664 
00:09:38.664 17:18:58 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:09:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.664 --rc genhtml_branch_coverage=1
00:09:38.664 --rc genhtml_function_coverage=1
00:09:38.664 --rc genhtml_legend=1
00:09:38.664 --rc geninfo_all_blocks=1
00:09:38.664 --rc geninfo_unexecuted_blocks=1
00:09:38.664 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:38.664 '
00:09:38.664 17:18:58 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:09:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.664 --rc genhtml_branch_coverage=1
00:09:38.664 --rc genhtml_function_coverage=1
00:09:38.664 --rc genhtml_legend=1
00:09:38.664 --rc geninfo_all_blocks=1
00:09:38.664 --rc geninfo_unexecuted_blocks=1
00:09:38.664 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:38.664 '
00:09:38.664 17:18:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:09:38.664 17:18:58 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:09:38.664 17:18:58 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:38.664 17:18:58 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:38.664 17:18:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:38.664 17:18:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:38.664 17:18:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:38.664 17:18:58 -- paths/export.sh@5 -- $ export PATH
00:09:38.664 17:18:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:38.664 17:18:58 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:09:38.664 17:18:58 -- common/autobuild_common.sh@440 -- $ date +%s
00:09:38.664 17:18:58 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731169138.XXXXXX
00:09:38.664 17:18:58 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731169138.UCAEXl
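
Note on the trace above: paths/export.sh@2-4 each prepend one tool directory (golangci-lint, Go, protoc) to the inherited PATH, one assignment per line, so the echoed value ends up with duplicate segments simply because the same directories were already present from an earlier sourcing of the file during the run. A small sketch of the same prepend pattern; the containment check is an addition for illustration, not part of the traced script:

  #!/usr/bin/env bash
  # paths/export.sh-style PATH setup, guarded against the duplicate
  # segments visible in the echoed PATH above (guard is hypothetical).
  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;            # already present: skip instead of duplicating
          *) PATH="$1:$PATH" ;;
      esac
  }

  prepend_path /opt/golangci/1.54.2/bin
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/protoc/21.7/bin
  export PATH
  echo "$PATH"
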
00:09:38.664 17:18:58 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:09:38.664 17:18:58 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:09:38.664 17:18:58 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:09:38.664 17:18:58 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:09:38.664 17:18:58 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:09:38.664 17:18:58 -- common/autobuild_common.sh@456 -- $ get_config_params
00:09:38.664 17:18:58 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:09:38.664 17:18:58 -- common/autotest_common.sh@10 -- $ set +x
00:09:38.664 17:18:58 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:09:38.664 17:18:58 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:09:38.664 17:18:58 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:38.664 17:18:58 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:09:38.664 17:18:58 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:09:38.664 17:18:58 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:09:38.664 17:18:58 -- spdk/autopackage.sh@19 -- $ timing_finish
00:09:38.664 17:18:58 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:38.664 17:18:58 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:09:38.664 17:18:58 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:38.664 17:18:58 -- spdk/autopackage.sh@20 -- $ exit 0
00:09:38.664 + [[ -n 248248 ]]
00:09:38.664 + sudo kill 248248
00:09:38.674 [Pipeline] }
00:09:38.691 [Pipeline] // stage
00:09:38.697 [Pipeline] }
00:09:38.712 [Pipeline] // timeout
00:09:38.717 [Pipeline] }
00:09:38.733 [Pipeline] // catchError
00:09:38.738 [Pipeline] }
00:09:38.755 [Pipeline] // wrap
00:09:38.761 [Pipeline] }
00:09:38.775 [Pipeline] // catchError
00:09:38.785 [Pipeline] stage
00:09:38.788 [Pipeline] { (Epilogue)
00:09:38.801 [Pipeline] catchError
00:09:38.803 [Pipeline] {
00:09:38.816 [Pipeline] echo
00:09:38.818 Cleanup processes
00:09:38.824 [Pipeline] sh
00:09:39.110 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:39.110 390047 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:39.124 [Pipeline] sh
00:09:39.412 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:39.412 ++ grep -v 'sudo pgrep'
00:09:39.412 ++ awk '{print $1}'
00:09:39.412 + sudo kill -9
00:09:39.412 + true
00:09:39.424 [Pipeline] sh
00:09:39.709 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:39.709 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:39.709 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:41.087 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
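
Note on the xz messages above: compress_artifacts.sh runs multi-threaded xz, and each worker thread needs its own block-sized buffer. When the configured memory cap (14,718 MiB here) cannot accommodate one thread per CPU, xz scales the thread count down (112 to 89) and continues rather than failing. The exact invocation is not shown in the log; a representative call that triggers the same behavior would be:

  # -T0 asks for one worker per core; the compress-side memory limit then
  # forces xz to reduce the thread count, printing the notice seen above.
  xz -T0 --memlimit-compress=14718MiB artifacts.tar
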
00:09:51.118 [Pipeline] sh
00:09:51.406 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:51.406 Artifacts sizes are good
00:09:51.421 [Pipeline] archiveArtifacts
00:09:51.429 Archiving artifacts
00:09:51.559 [Pipeline] sh
00:09:51.846 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:51.862 [Pipeline] cleanWs
00:09:51.872 [WS-CLEANUP] Deleting project workspace...
00:09:51.872 [WS-CLEANUP] Deferred wipeout is used...
00:09:51.879 [WS-CLEANUP] done
00:09:51.881 [Pipeline] }
00:09:51.898 [Pipeline] // catchError
00:09:51.910 [Pipeline] sh
00:09:52.195 + logger -p user.info -t JENKINS-CI
00:09:52.204 [Pipeline] }
00:09:52.218 [Pipeline] // stage
00:09:52.223 [Pipeline] }
00:09:52.237 [Pipeline] // node
00:09:52.242 [Pipeline] End of Pipeline
00:09:52.286 Finished: SUCCESS