00:00:00.002 Started by upstream project "autotest-nightly-lts" build number 2239 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3498 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.044 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.048 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.074 Fetching changes from the remote Git repository 00:00:00.078 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.121 Using shallow fetch with depth 1 00:00:00.121 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.121 > git --version # timeout=10 00:00:00.185 > git --version # 'git version 2.39.2' 00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.235 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.235 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.826 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.840 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.855 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:04.855 > git config core.sparsecheckout # timeout=10 00:00:04.865 > git read-tree -mu HEAD # timeout=10 00:00:04.883 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:04.906 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:04.906 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:04.987 [Pipeline] Start of Pipeline 00:00:05.000 [Pipeline] library 00:00:05.002 Loading library shm_lib@master 00:00:05.002 Library shm_lib@master is cached. Copying from home. 00:00:05.018 [Pipeline] node 00:00:05.035 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:05.037 [Pipeline] { 00:00:05.048 [Pipeline] catchError 00:00:05.050 [Pipeline] { 00:00:05.062 [Pipeline] wrap 00:00:05.071 [Pipeline] { 00:00:05.079 [Pipeline] stage 00:00:05.082 [Pipeline] { (Prologue) 00:00:05.290 [Pipeline] sh 00:00:05.575 + logger -p user.info -t JENKINS-CI 00:00:05.592 [Pipeline] echo 00:00:05.594 Node: WFP39 00:00:05.601 [Pipeline] sh 00:00:05.899 [Pipeline] setCustomBuildProperty 00:00:05.908 [Pipeline] echo 00:00:05.910 Cleanup processes 00:00:05.913 [Pipeline] sh 00:00:06.194 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.194 586022 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.206 [Pipeline] sh 00:00:06.495 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.495 ++ grep -v 'sudo pgrep' 00:00:06.495 ++ awk '{print $1}' 00:00:06.495 + sudo kill -9 00:00:06.495 + true 00:00:06.510 [Pipeline] cleanWs 00:00:06.521 [WS-CLEANUP] Deleting project workspace... 00:00:06.521 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.528 [WS-CLEANUP] done 00:00:06.532 [Pipeline] setCustomBuildProperty 00:00:06.572 [Pipeline] sh 00:00:06.860 + sudo git config --global --replace-all safe.directory '*' 00:00:06.947 [Pipeline] httpRequest 00:00:07.276 [Pipeline] echo 00:00:07.277 Sorcerer 10.211.164.101 is alive 00:00:07.283 [Pipeline] retry 00:00:07.285 [Pipeline] { 00:00:07.295 [Pipeline] httpRequest 00:00:07.298 HttpMethod: GET 00:00:07.299 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:07.299 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:07.302 Response Code: HTTP/1.1 200 OK 00:00:07.303 Success: Status code 200 is in the accepted range: 200,404 00:00:07.303 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:07.593 [Pipeline] } 00:00:07.611 [Pipeline] // retry 00:00:07.619 [Pipeline] sh 00:00:07.905 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:07.920 [Pipeline] httpRequest 00:00:08.267 [Pipeline] echo 00:00:08.269 Sorcerer 10.211.164.101 is alive 00:00:08.275 [Pipeline] retry 00:00:08.277 [Pipeline] { 00:00:08.289 [Pipeline] httpRequest 00:00:08.293 HttpMethod: GET 00:00:08.293 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:08.295 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:08.298 Response Code: HTTP/1.1 200 OK 00:00:08.298 Success: Status code 200 is in the accepted range: 200,404 00:00:08.298 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:23.981 [Pipeline] } 00:00:23.998 [Pipeline] // retry 00:00:24.005 [Pipeline] sh 00:00:24.290 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:26.834 [Pipeline] sh 00:00:27.117 + git -C spdk log --oneline -n5 00:00:27.117 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:27.117 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:27.117 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:27.117 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:27.117 9469ea403 nvme/fio_plugin: add trim support 00:00:27.127 [Pipeline] } 00:00:27.140 [Pipeline] // stage 00:00:27.148 [Pipeline] stage 00:00:27.150 [Pipeline] { (Prepare) 00:00:27.168 [Pipeline] writeFile 00:00:27.185 [Pipeline] sh 00:00:27.468 + logger -p user.info -t JENKINS-CI 00:00:27.480 [Pipeline] sh 00:00:27.765 + logger -p user.info -t JENKINS-CI 00:00:27.777 [Pipeline] sh 00:00:28.063 + cat autorun-spdk.conf 00:00:28.063 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.063 SPDK_TEST_FUZZER_SHORT=1 00:00:28.063 SPDK_TEST_FUZZER=1 00:00:28.063 SPDK_RUN_UBSAN=1 00:00:28.070 RUN_NIGHTLY=1 00:00:28.073 [Pipeline] readFile 00:00:28.092 [Pipeline] withEnv 00:00:28.094 [Pipeline] { 00:00:28.105 [Pipeline] sh 00:00:28.388 + set -ex 00:00:28.388 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:28.388 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:28.388 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.388 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:28.388 ++ SPDK_TEST_FUZZER=1 00:00:28.388 ++ SPDK_RUN_UBSAN=1 00:00:28.388 ++ RUN_NIGHTLY=1 00:00:28.388 + case $SPDK_TEST_NVMF_NICS in 00:00:28.388 + DRIVERS= 00:00:28.388 + [[ -n '' ]] 00:00:28.388 + exit 0 00:00:28.397 
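For readers following the prologue above: the "Cleanup processes" step (the pgrep/kill sequence around 00:00:06) reduces to a small guard script. The sketch below is reconstructed from the xtrace lines in this log rather than taken from the job's actual script, and the variable name "workspace" is illustrative.

#!/usr/bin/env bash
# Reconstruction of the workspace cleanup seen in the prologue above.
workspace=/var/jenkins/workspace/short-fuzz-phy-autotest

# Find leftover processes still referencing the spdk tree. 'pgrep -af'
# matches full command lines, so the pgrep invocation itself appears in
# its own results and has to be filtered back out before taking column 1
# (the pid).
pids=$(sudo pgrep -af "$workspace/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

# With no survivors the pid list is empty and 'kill -9' exits non-zero;
# the trailing 'true' (visible as '+ true' in the log above) keeps the
# step from failing the build under 'set -e'.
sudo kill -9 $pids || true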
[Pipeline] }
00:00:28.409 [Pipeline] // withEnv
00:00:28.414 [Pipeline] }
00:00:28.426 [Pipeline] // stage
00:00:28.435 [Pipeline] catchError
00:00:28.437 [Pipeline] {
00:00:28.447 [Pipeline] timeout
00:00:28.447 Timeout set to expire in 30 min
00:00:28.449 [Pipeline] {
00:00:28.460 [Pipeline] stage
00:00:28.462 [Pipeline] { (Tests)
00:00:28.476 [Pipeline] sh
00:00:28.759 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:28.759 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:28.759 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:28.759 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:28.759 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:28.759 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:28.759 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:28.759 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:28.759 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:28.759 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:28.759 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:28.759 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:28.759 + source /etc/os-release
00:00:28.759 ++ NAME='Fedora Linux'
00:00:28.759 ++ VERSION='39 (Cloud Edition)'
00:00:28.759 ++ ID=fedora
00:00:28.759 ++ VERSION_ID=39
00:00:28.759 ++ VERSION_CODENAME=
00:00:28.759 ++ PLATFORM_ID=platform:f39
00:00:28.759 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:28.759 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:28.759 ++ LOGO=fedora-logo-icon
00:00:28.759 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:28.759 ++ HOME_URL=https://fedoraproject.org/
00:00:28.759 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:28.760 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:28.760 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:28.760 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:28.760 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:28.760 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:28.760 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:28.760 ++ SUPPORT_END=2024-11-12
00:00:28.760 ++ VARIANT='Cloud Edition'
00:00:28.760 ++ VARIANT_ID=cloud
00:00:28.760 + uname -a
00:00:28.760 Linux spdk-wfp-39 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:00:28.760 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:32.047 Hugepages
00:00:32.047 node hugesize free / total
00:00:32.047 node0 1048576kB 0 / 0
00:00:32.047 node0 2048kB 0 / 0
00:00:32.047 node1 1048576kB 0 / 0
00:00:32.047 node1 2048kB 0 / 0
00:00:32.047
00:00:32.047 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:32.047 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:32.047 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:32.047 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:32.047 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:32.047 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:32.047 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:32.047 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:32.047 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:32.047 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:32.047 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:32.047 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:32.047 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:32.047 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:32.047 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:32.047 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:32.047 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:32.047 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:32.047 + rm -f /tmp/spdk-ld-path
00:00:32.047 + source autorun-spdk.conf
00:00:32.047 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.047 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:32.047 ++ SPDK_TEST_FUZZER=1
00:00:32.047 ++ SPDK_RUN_UBSAN=1
00:00:32.047 ++ RUN_NIGHTLY=1
00:00:32.047 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:32.047 + [[ -n '' ]]
00:00:32.047 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:32.047 + for M in /var/spdk/build-*-manifest.txt
00:00:32.047 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:32.047 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:32.047 + for M in /var/spdk/build-*-manifest.txt
00:00:32.047 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:32.047 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:32.047 + for M in /var/spdk/build-*-manifest.txt
00:00:32.047 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:32.047 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:32.047 ++ uname
00:00:32.047 + [[ Linux == \L\i\n\u\x ]]
00:00:32.047 + sudo dmesg -T
00:00:32.047 + sudo dmesg --clear
00:00:32.047 + dmesg_pid=586955
00:00:32.047 + [[ Fedora Linux == FreeBSD ]]
00:00:32.047 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.047 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.047 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:32.047 + [[ -x /usr/src/fio-static/fio ]]
00:00:32.047 + export FIO_BIN=/usr/src/fio-static/fio
00:00:32.047 + FIO_BIN=/usr/src/fio-static/fio
00:00:32.047 + sudo dmesg -Tw
00:00:32.047 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:32.047 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:32.047 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:32.047 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:32.047 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:32.047 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:32.047 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:32.047 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:32.047 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:32.047 Test configuration: 00:00:32.047 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.047 SPDK_TEST_FUZZER_SHORT=1 00:00:32.047 SPDK_TEST_FUZZER=1 00:00:32.047 SPDK_RUN_UBSAN=1 00:00:32.306 RUN_NIGHTLY=1 14:20:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:32.306 14:20:14 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:32.306 14:20:14 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:32.306 14:20:14 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:32.307 14:20:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.307 14:20:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.307 14:20:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.307 14:20:14 -- paths/export.sh@5 -- $ export PATH 00:00:32.307 14:20:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.307 14:20:14 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:32.307 14:20:14 -- common/autobuild_common.sh@440 -- $ date +%s 00:00:32.307 14:20:14 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1727785214.XXXXXX 00:00:32.307 14:20:14 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1727785214.xQgBA0 00:00:32.307 14:20:14 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:00:32.307 14:20:14 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:00:32.307 14:20:14 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:32.307 14:20:14 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:32.307 14:20:14 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:32.307 14:20:14 -- common/autobuild_common.sh@456 -- $ get_config_params 00:00:32.307 14:20:14 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:00:32.307 14:20:14 -- common/autotest_common.sh@10 -- $ set +x 00:00:32.307 14:20:14 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:32.307 14:20:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:32.307 14:20:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:32.307 14:20:14 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:32.307 14:20:14 -- spdk/autobuild.sh@16 -- $ date -u 00:00:32.307 Tue Oct 1 12:20:14 PM UTC 2024 00:00:32.307 14:20:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:32.307 LTS-66-g726a04d70 00:00:32.307 14:20:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:32.307 14:20:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:32.307 14:20:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:32.307 14:20:14 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:00:32.307 14:20:14 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:32.307 14:20:14 -- common/autotest_common.sh@10 -- $ set +x 00:00:32.307 ************************************ 00:00:32.307 START TEST ubsan 00:00:32.307 ************************************ 00:00:32.307 14:20:14 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:00:32.307 using ubsan 00:00:32.307 00:00:32.307 real 0m0.000s 00:00:32.307 user 0m0.000s 00:00:32.307 sys 0m0.000s 00:00:32.307 14:20:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:32.307 14:20:14 -- common/autotest_common.sh@10 -- $ set +x 00:00:32.307 ************************************ 00:00:32.307 END TEST ubsan 00:00:32.307 ************************************ 00:00:32.307 14:20:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:32.307 14:20:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:32.307 14:20:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:32.307 14:20:14 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:32.307 14:20:14 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:32.307 14:20:14 -- common/autobuild_common.sh@428 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:32.307 14:20:14 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:00:32.307 14:20:14 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:32.307 14:20:14 -- common/autotest_common.sh@10 -- $ set +x 00:00:32.307 ************************************ 00:00:32.307 START TEST autobuild_llvm_precompile 00:00:32.307 ************************************ 00:00:32.307 14:20:14 -- common/autotest_common.sh@1104 -- $ _llvm_precompile 00:00:32.307 14:20:14 -- common/autobuild_common.sh@32 -- $ clang --version 00:00:32.307 14:20:14 -- common/autobuild_common.sh@32 -- 
$ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:00:32.307 Target: x86_64-redhat-linux-gnu 00:00:32.307 Thread model: posix 00:00:32.307 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:32.307 14:20:14 -- common/autobuild_common.sh@33 -- $ clang_num=17 00:00:32.307 14:20:14 -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:00:32.307 14:20:14 -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:00:32.307 14:20:14 -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:00:32.307 14:20:14 -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:00:32.307 14:20:14 -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:32.307 14:20:14 -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:32.307 14:20:14 -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:00:32.307 14:20:14 -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:00:32.307 14:20:14 -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:32.566 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:32.566 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:33.133 Using 'verbs' RDMA provider 00:00:48.605 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:00.830 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:01.398 Creating mk/config.mk...done. 00:01:01.398 Creating mk/cc.flags.mk...done. 00:01:01.398 Type 'make' to build. 
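The fuzzer-library lookup just above (autobuild_common.sh lines 38-40 in the trace) relies on bash extended globbing to find clang's libFuzzer runtime before wiring it into configure as --with-fuzzer. Pulled out as a standalone sketch, with the version numbers filled in from the 'clang --version' output captured in this log:

#!/usr/bin/env bash
# extglob provides the '@(a|b)' and '?(x)' patterns used below; it must be
# enabled before bash parses the glob when run outside the original script.
shopt -s extglob

clang_num=17          # major version, parsed by the regex match in the log
clang_version=17.0.6  # full version from the same 'clang --version' output

# Match libclang_rt.fuzzer_no_main across lib/lib64, a major-only or fully
# versioned clang directory, and an optional '-x86_64' file-name suffix.
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
fuzzer_lib=${fuzzer_libs[0]}

# On this builder the glob resolved to
# /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a,
# which is the path appended to configure above.
[[ -e $fuzzer_lib ]] && echo "using fuzzer lib: $fuzzer_lib"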
00:01:01.398 00:01:01.398 real 0m28.904s 00:01:01.398 user 0m12.643s 00:01:01.398 sys 0m15.728s 00:01:01.398 14:20:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:01.398 14:20:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.398 ************************************ 00:01:01.398 END TEST autobuild_llvm_precompile 00:01:01.398 ************************************ 00:01:01.398 14:20:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:01.398 14:20:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:01.398 14:20:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:01.398 14:20:43 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:01.399 14:20:43 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:01.658 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:01.658 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:01.917 Using 'verbs' RDMA provider 00:01:14.713 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:26.972 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:26.972 Creating mk/config.mk...done. 00:01:26.972 Creating mk/cc.flags.mk...done. 00:01:26.972 Type 'make' to build. 00:01:26.972 14:21:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:01:26.972 14:21:08 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:26.972 14:21:08 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:26.972 14:21:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.972 ************************************ 00:01:26.972 START TEST make 00:01:26.972 ************************************ 00:01:26.972 14:21:08 -- common/autotest_common.sh@1104 -- $ make -j72 00:01:26.972 make[1]: Nothing to be done for 'all'. 
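The 'START TEST' / 'END TEST' banners and the real/user/sys timings that recur through this log come from spdk's run_test helper in test/common/autotest_common.sh, which is not shown here. A minimal stand-in that mirrors only the observable output might look like this (the real helper does more bookkeeping, and the banner width is approximate):

# Hypothetical stand-in for spdk's run_test, reconstructed from its output.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    # 'time' on the whole command produces the real/user/sys triple
    # printed between the banners in the log above.
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage, as invoked by autobuild.sh above:
run_test make make -j72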
00:01:28.385 The Meson build system
00:01:28.385 Version: 1.5.0
00:01:28.385 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:01:28.385 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:28.385 Build type: native build
00:01:28.385 Project name: libvfio-user
00:01:28.385 Project version: 0.0.1
00:01:28.385 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:01:28.385 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:01:28.385 Host machine cpu family: x86_64
00:01:28.385 Host machine cpu: x86_64
00:01:28.385 Run-time dependency threads found: YES
00:01:28.385 Library dl found: YES
00:01:28.385 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:28.385 Run-time dependency json-c found: YES 0.17
00:01:28.385 Run-time dependency cmocka found: YES 1.1.7
00:01:28.385 Program pytest-3 found: NO
00:01:28.385 Program flake8 found: NO
00:01:28.385 Program misspell-fixer found: NO
00:01:28.385 Program restructuredtext-lint found: NO
00:01:28.385 Program valgrind found: YES (/usr/bin/valgrind)
00:01:28.385 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:28.385 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:28.385 Compiler for C supports arguments -Wwrite-strings: YES
00:01:28.385 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:28.385 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:28.385 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:28.385 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:28.385 Build targets in project: 8 00:01:28.385 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:28.385 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:28.385 00:01:28.385 libvfio-user 0.0.1 00:01:28.385 00:01:28.385 User defined options 00:01:28.385 buildtype : debug 00:01:28.385 default_library: static 00:01:28.385 libdir : /usr/local/lib 00:01:28.385 00:01:28.385 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:28.645 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:28.646 [1/36] Compiling C object samples/null.p/null.c.o 00:01:28.646 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:28.646 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:28.646 [4/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:28.646 [5/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:28.646 [6/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:28.646 [7/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:28.646 [8/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:28.646 [9/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:28.646 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:28.646 [11/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:28.646 [12/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:28.646 [13/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:28.646 [14/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:28.646 [15/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:28.646 [16/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:28.646 [17/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:28.646 [18/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:28.646 [19/36] Compiling C object samples/server.p/server.c.o 00:01:28.646 [20/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:28.646 [21/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:28.646 [22/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:28.646 [23/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:28.646 [24/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:28.646 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:28.646 [26/36] Compiling C object samples/client.p/client.c.o 00:01:28.646 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:28.646 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:28.646 [29/36] Linking static target lib/libvfio-user.a 00:01:28.906 [30/36] Linking target samples/client 00:01:28.906 [31/36] Linking target test/unit_tests 00:01:28.906 [32/36] Linking target samples/shadow_ioeventfd_server 00:01:28.906 [33/36] Linking target samples/server 00:01:28.906 [34/36] Linking target samples/gpio-pci-idio-16 00:01:28.906 [35/36] Linking target samples/lspci 00:01:28.906 [36/36] Linking target samples/null 00:01:28.906 INFO: autodetecting backend as ninja 00:01:28.906 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:28.906 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:29.166 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:29.166 ninja: no work to do. 00:01:35.755 The Meson build system 00:01:35.755 Version: 1.5.0 00:01:35.755 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:35.755 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:35.755 Build type: native build 00:01:35.755 Program cat found: YES (/usr/bin/cat) 00:01:35.755 Project name: DPDK 00:01:35.755 Project version: 23.11.0 00:01:35.755 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:35.755 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:35.755 Host machine cpu family: x86_64 00:01:35.755 Host machine cpu: x86_64 00:01:35.755 Message: ## Building in Developer Mode ## 00:01:35.755 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:35.755 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:35.755 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:35.755 Program python3 found: YES (/usr/bin/python3) 00:01:35.755 Program cat found: YES (/usr/bin/cat) 00:01:35.755 Compiler for C supports arguments -march=native: YES 00:01:35.755 Checking for size of "void *" : 8 00:01:35.755 Checking for size of "void *" : 8 (cached) 00:01:35.755 Library m found: YES 00:01:35.755 Library numa found: YES 00:01:35.755 Has header "numaif.h" : YES 00:01:35.755 Library fdt found: NO 00:01:35.755 Library execinfo found: NO 00:01:35.755 Has header "execinfo.h" : YES 00:01:35.755 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:35.755 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:35.755 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:35.755 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:35.755 Run-time dependency openssl found: YES 3.1.1 00:01:35.755 Run-time dependency libpcap found: YES 1.10.4 00:01:35.755 Has header "pcap.h" with dependency libpcap: YES 00:01:35.755 Compiler for C supports arguments -Wcast-qual: YES 00:01:35.755 Compiler for C supports arguments -Wdeprecated: YES 00:01:35.755 Compiler for C supports arguments -Wformat: YES 00:01:35.755 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:35.755 Compiler for C supports arguments -Wformat-security: YES 00:01:35.755 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.755 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:35.755 Compiler for C supports arguments -Wnested-externs: YES 00:01:35.755 Compiler for C supports arguments -Wold-style-definition: YES 00:01:35.755 Compiler for C supports arguments -Wpointer-arith: YES 00:01:35.755 Compiler for C supports arguments -Wsign-compare: YES 00:01:35.755 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:35.755 Compiler for C supports arguments -Wundef: YES 00:01:35.755 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.755 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:35.755 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:35.755 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:35.755 Program objdump found: YES (/usr/bin/objdump) 00:01:35.755 
Compiler for C supports arguments -mavx512f: YES 00:01:35.755 Checking if "AVX512 checking" compiles: YES 00:01:35.755 Fetching value of define "__SSE4_2__" : 1 00:01:35.755 Fetching value of define "__AES__" : 1 00:01:35.755 Fetching value of define "__AVX__" : 1 00:01:35.755 Fetching value of define "__AVX2__" : 1 00:01:35.755 Fetching value of define "__AVX512BW__" : 1 00:01:35.755 Fetching value of define "__AVX512CD__" : 1 00:01:35.755 Fetching value of define "__AVX512DQ__" : 1 00:01:35.755 Fetching value of define "__AVX512F__" : 1 00:01:35.755 Fetching value of define "__AVX512VL__" : 1 00:01:35.755 Fetching value of define "__PCLMUL__" : 1 00:01:35.755 Fetching value of define "__RDRND__" : 1 00:01:35.755 Fetching value of define "__RDSEED__" : 1 00:01:35.755 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:35.755 Fetching value of define "__znver1__" : (undefined) 00:01:35.755 Fetching value of define "__znver2__" : (undefined) 00:01:35.755 Fetching value of define "__znver3__" : (undefined) 00:01:35.755 Fetching value of define "__znver4__" : (undefined) 00:01:35.755 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:35.755 Message: lib/log: Defining dependency "log" 00:01:35.755 Message: lib/kvargs: Defining dependency "kvargs" 00:01:35.755 Message: lib/telemetry: Defining dependency "telemetry" 00:01:35.755 Checking for function "getentropy" : NO 00:01:35.755 Message: lib/eal: Defining dependency "eal" 00:01:35.755 Message: lib/ring: Defining dependency "ring" 00:01:35.755 Message: lib/rcu: Defining dependency "rcu" 00:01:35.755 Message: lib/mempool: Defining dependency "mempool" 00:01:35.755 Message: lib/mbuf: Defining dependency "mbuf" 00:01:35.755 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:35.755 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:35.755 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:35.755 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:35.755 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:35.755 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:35.755 Compiler for C supports arguments -mpclmul: YES 00:01:35.755 Compiler for C supports arguments -maes: YES 00:01:35.755 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.756 Compiler for C supports arguments -mavx512bw: YES 00:01:35.756 Compiler for C supports arguments -mavx512dq: YES 00:01:35.756 Compiler for C supports arguments -mavx512vl: YES 00:01:35.756 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:35.756 Compiler for C supports arguments -mavx2: YES 00:01:35.756 Compiler for C supports arguments -mavx: YES 00:01:35.756 Message: lib/net: Defining dependency "net" 00:01:35.756 Message: lib/meter: Defining dependency "meter" 00:01:35.756 Message: lib/ethdev: Defining dependency "ethdev" 00:01:35.756 Message: lib/pci: Defining dependency "pci" 00:01:35.756 Message: lib/cmdline: Defining dependency "cmdline" 00:01:35.756 Message: lib/hash: Defining dependency "hash" 00:01:35.756 Message: lib/timer: Defining dependency "timer" 00:01:35.756 Message: lib/compressdev: Defining dependency "compressdev" 00:01:35.756 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:35.756 Message: lib/dmadev: Defining dependency "dmadev" 00:01:35.756 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:35.756 Message: lib/power: Defining dependency "power" 00:01:35.756 Message: lib/reorder: Defining dependency "reorder" 00:01:35.756 Message: lib/security: Defining dependency 
"security" 00:01:35.756 Has header "linux/userfaultfd.h" : YES 00:01:35.756 Has header "linux/vduse.h" : YES 00:01:35.756 Message: lib/vhost: Defining dependency "vhost" 00:01:35.756 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:35.756 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:35.756 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:35.756 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:35.756 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:35.756 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:35.756 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:35.756 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:35.756 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:35.756 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:35.756 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:35.756 Configuring doxy-api-html.conf using configuration 00:01:35.756 Configuring doxy-api-man.conf using configuration 00:01:35.756 Program mandb found: YES (/usr/bin/mandb) 00:01:35.756 Program sphinx-build found: NO 00:01:35.756 Configuring rte_build_config.h using configuration 00:01:35.756 Message: 00:01:35.756 ================= 00:01:35.756 Applications Enabled 00:01:35.756 ================= 00:01:35.756 00:01:35.756 apps: 00:01:35.756 00:01:35.756 00:01:35.756 Message: 00:01:35.756 ================= 00:01:35.756 Libraries Enabled 00:01:35.756 ================= 00:01:35.756 00:01:35.756 libs: 00:01:35.756 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:35.756 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:35.756 cryptodev, dmadev, power, reorder, security, vhost, 00:01:35.756 00:01:35.756 Message: 00:01:35.756 =============== 00:01:35.756 Drivers Enabled 00:01:35.756 =============== 00:01:35.756 00:01:35.756 common: 00:01:35.756 00:01:35.756 bus: 00:01:35.756 pci, vdev, 00:01:35.756 mempool: 00:01:35.756 ring, 00:01:35.756 dma: 00:01:35.756 00:01:35.756 net: 00:01:35.756 00:01:35.756 crypto: 00:01:35.756 00:01:35.756 compress: 00:01:35.756 00:01:35.756 vdpa: 00:01:35.756 00:01:35.756 00:01:35.756 Message: 00:01:35.756 ================= 00:01:35.756 Content Skipped 00:01:35.756 ================= 00:01:35.756 00:01:35.756 apps: 00:01:35.756 dumpcap: explicitly disabled via build config 00:01:35.756 graph: explicitly disabled via build config 00:01:35.756 pdump: explicitly disabled via build config 00:01:35.756 proc-info: explicitly disabled via build config 00:01:35.756 test-acl: explicitly disabled via build config 00:01:35.756 test-bbdev: explicitly disabled via build config 00:01:35.756 test-cmdline: explicitly disabled via build config 00:01:35.756 test-compress-perf: explicitly disabled via build config 00:01:35.756 test-crypto-perf: explicitly disabled via build config 00:01:35.756 test-dma-perf: explicitly disabled via build config 00:01:35.756 test-eventdev: explicitly disabled via build config 00:01:35.756 test-fib: explicitly disabled via build config 00:01:35.756 test-flow-perf: explicitly disabled via build config 00:01:35.756 test-gpudev: explicitly disabled via build config 00:01:35.756 test-mldev: explicitly disabled via build config 00:01:35.756 test-pipeline: explicitly disabled via build config 00:01:35.756 test-pmd: explicitly disabled via build config 00:01:35.756 test-regex: explicitly disabled 
via build config 00:01:35.756 test-sad: explicitly disabled via build config 00:01:35.756 test-security-perf: explicitly disabled via build config 00:01:35.756 00:01:35.756 libs: 00:01:35.756 metrics: explicitly disabled via build config 00:01:35.756 acl: explicitly disabled via build config 00:01:35.756 bbdev: explicitly disabled via build config 00:01:35.756 bitratestats: explicitly disabled via build config 00:01:35.756 bpf: explicitly disabled via build config 00:01:35.756 cfgfile: explicitly disabled via build config 00:01:35.756 distributor: explicitly disabled via build config 00:01:35.756 efd: explicitly disabled via build config 00:01:35.756 eventdev: explicitly disabled via build config 00:01:35.756 dispatcher: explicitly disabled via build config 00:01:35.756 gpudev: explicitly disabled via build config 00:01:35.756 gro: explicitly disabled via build config 00:01:35.756 gso: explicitly disabled via build config 00:01:35.756 ip_frag: explicitly disabled via build config 00:01:35.756 jobstats: explicitly disabled via build config 00:01:35.756 latencystats: explicitly disabled via build config 00:01:35.756 lpm: explicitly disabled via build config 00:01:35.756 member: explicitly disabled via build config 00:01:35.756 pcapng: explicitly disabled via build config 00:01:35.756 rawdev: explicitly disabled via build config 00:01:35.756 regexdev: explicitly disabled via build config 00:01:35.756 mldev: explicitly disabled via build config 00:01:35.756 rib: explicitly disabled via build config 00:01:35.756 sched: explicitly disabled via build config 00:01:35.756 stack: explicitly disabled via build config 00:01:35.756 ipsec: explicitly disabled via build config 00:01:35.756 pdcp: explicitly disabled via build config 00:01:35.756 fib: explicitly disabled via build config 00:01:35.756 port: explicitly disabled via build config 00:01:35.756 pdump: explicitly disabled via build config 00:01:35.756 table: explicitly disabled via build config 00:01:35.756 pipeline: explicitly disabled via build config 00:01:35.756 graph: explicitly disabled via build config 00:01:35.756 node: explicitly disabled via build config 00:01:35.756 00:01:35.756 drivers: 00:01:35.756 common/cpt: not in enabled drivers build config 00:01:35.756 common/dpaax: not in enabled drivers build config 00:01:35.756 common/iavf: not in enabled drivers build config 00:01:35.756 common/idpf: not in enabled drivers build config 00:01:35.756 common/mvep: not in enabled drivers build config 00:01:35.756 common/octeontx: not in enabled drivers build config 00:01:35.756 bus/auxiliary: not in enabled drivers build config 00:01:35.756 bus/cdx: not in enabled drivers build config 00:01:35.756 bus/dpaa: not in enabled drivers build config 00:01:35.756 bus/fslmc: not in enabled drivers build config 00:01:35.756 bus/ifpga: not in enabled drivers build config 00:01:35.756 bus/platform: not in enabled drivers build config 00:01:35.756 bus/vmbus: not in enabled drivers build config 00:01:35.756 common/cnxk: not in enabled drivers build config 00:01:35.756 common/mlx5: not in enabled drivers build config 00:01:35.756 common/nfp: not in enabled drivers build config 00:01:35.756 common/qat: not in enabled drivers build config 00:01:35.756 common/sfc_efx: not in enabled drivers build config 00:01:35.756 mempool/bucket: not in enabled drivers build config 00:01:35.756 mempool/cnxk: not in enabled drivers build config 00:01:35.756 mempool/dpaa: not in enabled drivers build config 00:01:35.756 mempool/dpaa2: not in enabled drivers build config 
00:01:35.756 mempool/octeontx: not in enabled drivers build config 00:01:35.756 mempool/stack: not in enabled drivers build config 00:01:35.756 dma/cnxk: not in enabled drivers build config 00:01:35.756 dma/dpaa: not in enabled drivers build config 00:01:35.756 dma/dpaa2: not in enabled drivers build config 00:01:35.756 dma/hisilicon: not in enabled drivers build config 00:01:35.756 dma/idxd: not in enabled drivers build config 00:01:35.756 dma/ioat: not in enabled drivers build config 00:01:35.756 dma/skeleton: not in enabled drivers build config 00:01:35.756 net/af_packet: not in enabled drivers build config 00:01:35.756 net/af_xdp: not in enabled drivers build config 00:01:35.756 net/ark: not in enabled drivers build config 00:01:35.756 net/atlantic: not in enabled drivers build config 00:01:35.756 net/avp: not in enabled drivers build config 00:01:35.756 net/axgbe: not in enabled drivers build config 00:01:35.756 net/bnx2x: not in enabled drivers build config 00:01:35.756 net/bnxt: not in enabled drivers build config 00:01:35.756 net/bonding: not in enabled drivers build config 00:01:35.756 net/cnxk: not in enabled drivers build config 00:01:35.756 net/cpfl: not in enabled drivers build config 00:01:35.756 net/cxgbe: not in enabled drivers build config 00:01:35.756 net/dpaa: not in enabled drivers build config 00:01:35.756 net/dpaa2: not in enabled drivers build config 00:01:35.756 net/e1000: not in enabled drivers build config 00:01:35.756 net/ena: not in enabled drivers build config 00:01:35.756 net/enetc: not in enabled drivers build config 00:01:35.756 net/enetfec: not in enabled drivers build config 00:01:35.756 net/enic: not in enabled drivers build config 00:01:35.756 net/failsafe: not in enabled drivers build config 00:01:35.756 net/fm10k: not in enabled drivers build config 00:01:35.756 net/gve: not in enabled drivers build config 00:01:35.756 net/hinic: not in enabled drivers build config 00:01:35.756 net/hns3: not in enabled drivers build config 00:01:35.756 net/i40e: not in enabled drivers build config 00:01:35.756 net/iavf: not in enabled drivers build config 00:01:35.756 net/ice: not in enabled drivers build config 00:01:35.756 net/idpf: not in enabled drivers build config 00:01:35.756 net/igc: not in enabled drivers build config 00:01:35.756 net/ionic: not in enabled drivers build config 00:01:35.756 net/ipn3ke: not in enabled drivers build config 00:01:35.756 net/ixgbe: not in enabled drivers build config 00:01:35.756 net/mana: not in enabled drivers build config 00:01:35.756 net/memif: not in enabled drivers build config 00:01:35.756 net/mlx4: not in enabled drivers build config 00:01:35.756 net/mlx5: not in enabled drivers build config 00:01:35.756 net/mvneta: not in enabled drivers build config 00:01:35.756 net/mvpp2: not in enabled drivers build config 00:01:35.756 net/netvsc: not in enabled drivers build config 00:01:35.756 net/nfb: not in enabled drivers build config 00:01:35.756 net/nfp: not in enabled drivers build config 00:01:35.756 net/ngbe: not in enabled drivers build config 00:01:35.756 net/null: not in enabled drivers build config 00:01:35.756 net/octeontx: not in enabled drivers build config 00:01:35.756 net/octeon_ep: not in enabled drivers build config 00:01:35.756 net/pcap: not in enabled drivers build config 00:01:35.756 net/pfe: not in enabled drivers build config 00:01:35.756 net/qede: not in enabled drivers build config 00:01:35.756 net/ring: not in enabled drivers build config 00:01:35.756 net/sfc: not in enabled drivers build config 00:01:35.756 
net/softnic: not in enabled drivers build config 00:01:35.756 net/tap: not in enabled drivers build config 00:01:35.756 net/thunderx: not in enabled drivers build config 00:01:35.756 net/txgbe: not in enabled drivers build config 00:01:35.756 net/vdev_netvsc: not in enabled drivers build config 00:01:35.756 net/vhost: not in enabled drivers build config 00:01:35.756 net/virtio: not in enabled drivers build config 00:01:35.756 net/vmxnet3: not in enabled drivers build config 00:01:35.756 raw/*: missing internal dependency, "rawdev" 00:01:35.756 crypto/armv8: not in enabled drivers build config 00:01:35.756 crypto/bcmfs: not in enabled drivers build config 00:01:35.756 crypto/caam_jr: not in enabled drivers build config 00:01:35.756 crypto/ccp: not in enabled drivers build config 00:01:35.756 crypto/cnxk: not in enabled drivers build config 00:01:35.756 crypto/dpaa_sec: not in enabled drivers build config 00:01:35.756 crypto/dpaa2_sec: not in enabled drivers build config 00:01:35.756 crypto/ipsec_mb: not in enabled drivers build config 00:01:35.756 crypto/mlx5: not in enabled drivers build config 00:01:35.756 crypto/mvsam: not in enabled drivers build config 00:01:35.756 crypto/nitrox: not in enabled drivers build config 00:01:35.756 crypto/null: not in enabled drivers build config 00:01:35.756 crypto/octeontx: not in enabled drivers build config 00:01:35.756 crypto/openssl: not in enabled drivers build config 00:01:35.756 crypto/scheduler: not in enabled drivers build config 00:01:35.756 crypto/uadk: not in enabled drivers build config 00:01:35.756 crypto/virtio: not in enabled drivers build config 00:01:35.756 compress/isal: not in enabled drivers build config 00:01:35.756 compress/mlx5: not in enabled drivers build config 00:01:35.756 compress/octeontx: not in enabled drivers build config 00:01:35.756 compress/zlib: not in enabled drivers build config 00:01:35.756 regex/*: missing internal dependency, "regexdev" 00:01:35.756 ml/*: missing internal dependency, "mldev" 00:01:35.756 vdpa/ifc: not in enabled drivers build config 00:01:35.756 vdpa/mlx5: not in enabled drivers build config 00:01:35.756 vdpa/nfp: not in enabled drivers build config 00:01:35.756 vdpa/sfc: not in enabled drivers build config 00:01:35.756 event/*: missing internal dependency, "eventdev" 00:01:35.756 baseband/*: missing internal dependency, "bbdev" 00:01:35.756 gpu/*: missing internal dependency, "gpudev" 00:01:35.756 00:01:35.756 00:01:35.756 Build targets in project: 85 00:01:35.756 00:01:35.756 DPDK 23.11.0 00:01:35.756 00:01:35.756 User defined options 00:01:35.756 buildtype : debug 00:01:35.756 default_library : static 00:01:35.756 libdir : lib 00:01:35.756 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:35.756 c_args : -fPIC -Werror 00:01:35.756 c_link_args : 00:01:35.756 cpu_instruction_set: native 00:01:35.756 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:35.756 disable_libs : bbdev,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:35.756 enable_docs : false 00:01:35.756 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:35.756 enable_kmods : false 00:01:35.756 tests : false 
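The 'User defined options' block above is meson's echo of the configuration that spdk's dpdkbuild makefile passed in. Reassembled as a direct meson setup invocation it would look roughly like the following; this is a reconstruction from the summary, since the log does not show the literal command:

# Run from the dpdk source tree; options taken from the summary above.
meson setup build-tmp \
    --buildtype=debug \
    --default-library=static \
    --libdir=lib \
    --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
    -Dc_args='-fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump \
    -Ddisable_libs=bbdev,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump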
00:01:35.756 00:01:35.756 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.756 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:35.756 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:35.756 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:35.756 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:35.756 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:35.756 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:35.756 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:35.756 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:35.756 [8/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:35.756 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:35.756 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:35.756 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:35.756 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:35.756 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:35.756 [14/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:35.756 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:35.756 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.756 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:35.756 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:35.757 [19/265] Linking static target lib/librte_kvargs.a 00:01:35.757 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:35.757 [21/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:35.757 [22/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.757 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.757 [24/265] Linking static target lib/librte_log.a 00:01:35.757 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:35.757 [26/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.757 [27/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:35.757 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:36.015 [29/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:36.015 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:36.015 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:36.015 [32/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:36.015 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:36.015 [34/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:36.015 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:36.015 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:36.015 [37/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:36.015 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:36.015 
[39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:36.015 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:36.015 [41/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:36.015 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:36.015 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:36.015 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:36.015 [45/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:36.015 [46/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:36.015 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:36.015 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:36.015 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:36.015 [50/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:36.015 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:36.015 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:36.015 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:36.015 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:36.015 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:36.015 [56/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:36.015 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:36.015 [58/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:36.015 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:36.015 [60/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:36.015 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:36.015 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:36.015 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:36.015 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:36.015 [65/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:36.015 [66/265] Linking static target lib/librte_telemetry.a 00:01:36.015 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:36.015 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:36.015 [69/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:36.015 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:36.015 [71/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:36.015 [72/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:36.015 [73/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:36.015 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:36.015 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:36.015 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:36.015 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:36.015 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:36.015 [79/265] 
Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:36.015 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:36.015 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:36.015 [82/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:36.015 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:36.015 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:36.015 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:36.015 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:36.015 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:36.015 [88/265] Linking static target lib/librte_pci.a 00:01:36.015 [89/265] Linking static target lib/librte_ring.a 00:01:36.015 [90/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:36.015 [91/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:36.015 [92/265] Linking static target lib/librte_meter.a 00:01:36.015 [93/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:36.015 [94/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:36.015 [95/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:36.015 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:36.015 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:36.015 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:36.015 [99/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:36.015 [100/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:36.015 [101/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:36.015 [102/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:36.015 [103/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:36.015 [104/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.015 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:36.015 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:36.015 [107/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:36.015 [108/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:36.015 [109/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:36.015 [110/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:36.015 [111/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:36.015 [112/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:36.015 [113/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:36.015 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.015 [115/265] Linking static target lib/librte_net.a 00:01:36.015 [116/265] Linking target lib/librte_log.so.24.0 00:01:36.015 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:36.015 [118/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:36.015 [119/265] Linking static target lib/librte_eal.a 00:01:36.015 [120/265] Linking static target lib/librte_rcu.a 00:01:36.273 [121/265] Linking static target 
lib/librte_mempool.a 00:01:36.274 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:36.274 [123/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:36.274 [124/265] Linking static target lib/librte_mbuf.a 00:01:36.274 [125/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.274 [126/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:36.274 [127/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.274 [128/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.274 [129/265] Linking target lib/librte_kvargs.so.24.0 00:01:36.534 [130/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.534 [131/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.534 [132/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.534 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:36.534 [134/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:36.534 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:36.534 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:36.534 [137/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:36.534 [138/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:36.534 [139/265] Linking target lib/librte_telemetry.so.24.0 00:01:36.534 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:36.534 [141/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:36.534 [142/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:36.534 [143/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:36.534 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:36.534 [145/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:36.534 [146/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:36.534 [147/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:36.534 [148/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:36.534 [149/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:36.534 [150/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:36.534 [151/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:36.534 [152/265] Linking static target lib/librte_timer.a 00:01:36.534 [153/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:36.534 [154/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:36.534 [155/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:36.534 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:36.534 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:36.534 [158/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:36.534 [159/265] Linking static target lib/librte_cmdline.a 00:01:36.534 [160/265] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:01:36.534 [161/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:36.534 [162/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:36.534 [163/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:36.534 [164/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:36.534 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:36.534 [166/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:36.534 [167/265] Linking static target lib/librte_dmadev.a 00:01:36.534 [168/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:36.534 [169/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:36.534 [170/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:36.534 [171/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:36.534 [172/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:36.534 [173/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:36.793 [174/265] Linking static target lib/librte_compressdev.a 00:01:36.793 [175/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:36.793 [176/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:36.793 [177/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:36.793 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:36.793 [179/265] Linking static target lib/librte_hash.a 00:01:36.793 [180/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:36.793 [181/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:36.793 [182/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:36.793 [183/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:36.793 [184/265] Linking static target lib/librte_reorder.a 00:01:36.793 [185/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:36.793 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:36.793 [187/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:36.793 [188/265] Linking static target lib/librte_security.a 00:01:36.793 [189/265] Linking static target lib/librte_power.a 00:01:36.793 [190/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:36.793 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:36.793 [192/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:36.793 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:36.793 [194/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:36.793 [195/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:36.793 [196/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:36.793 [197/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.793 [198/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.793 [199/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.793 [200/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:01:36.793 [201/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:37.053 [202/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.053 [203/265] Linking static target drivers/librte_bus_pci.a 00:01:37.053 [204/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:37.053 [205/265] Linking static target drivers/librte_bus_vdev.a 00:01:37.053 [206/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.053 [207/265] Linking static target lib/librte_cryptodev.a 00:01:37.053 [208/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:37.053 [209/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.053 [210/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.053 [211/265] Linking static target drivers/librte_mempool_ring.a 00:01:37.053 [212/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.053 [213/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:37.312 [214/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.312 [215/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.312 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:37.312 [217/265] Linking static target lib/librte_ethdev.a 00:01:37.312 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.312 [219/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.572 [220/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.572 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.572 [222/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:37.572 [223/265] Linking static target lib/librte_vhost.a 00:01:37.832 [224/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.832 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.092 [226/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.477 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.049 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.206 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.271 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.271 [231/265] Linking target lib/librte_eal.so.24.0 00:01:49.271 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:49.530 [233/265] Linking target lib/librte_meter.so.24.0 00:01:49.530 [234/265] Linking target lib/librte_ring.so.24.0 00:01:49.530 [235/265] Linking target lib/librte_timer.so.24.0 00:01:49.530 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:49.530 [237/265] Linking target lib/librte_dmadev.so.24.0 00:01:49.530 [238/265] Linking target lib/librte_pci.so.24.0 00:01:49.530 
[239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:49.530 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:49.530 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:49.530 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:49.530 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:49.530 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:49.530 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:49.530 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:49.789 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:49.789 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:49.789 [249/265] Linking target lib/librte_mbuf.so.24.0 00:01:49.789 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:50.049 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:50.049 [252/265] Linking target lib/librte_net.so.24.0 00:01:50.049 [253/265] Linking target lib/librte_reorder.so.24.0 00:01:50.049 [254/265] Linking target lib/librte_compressdev.so.24.0 00:01:50.049 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:50.308 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:50.308 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:50.308 [258/265] Linking target lib/librte_hash.so.24.0 00:01:50.308 [259/265] Linking target lib/librte_cmdline.so.24.0 00:01:50.308 [260/265] Linking target lib/librte_security.so.24.0 00:01:50.308 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:50.308 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:50.308 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:50.567 [264/265] Linking target lib/librte_power.so.24.0 00:01:50.567 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:50.567 INFO: autodetecting backend as ninja 00:01:50.567 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:01:51.503 CC lib/ut_mock/mock.o 00:01:51.503 CC lib/log/log.o 00:01:51.503 CC lib/log/log_flags.o 00:01:51.503 CC lib/log/log_deprecated.o 00:01:51.503 CC lib/ut/ut.o 00:01:51.503 LIB libspdk_ut_mock.a 00:01:51.762 LIB libspdk_log.a 00:01:51.762 LIB libspdk_ut.a 00:01:52.021 CC lib/util/base64.o 00:01:52.021 CC lib/util/bit_array.o 00:01:52.021 CC lib/util/cpuset.o 00:01:52.021 CC lib/util/crc16.o 00:01:52.021 CC lib/util/crc32.o 00:01:52.021 CC lib/ioat/ioat.o 00:01:52.021 CC lib/util/crc32c.o 00:01:52.021 CC lib/util/crc32_ieee.o 00:01:52.021 CC lib/util/crc64.o 00:01:52.021 CC lib/util/dif.o 00:01:52.021 CC lib/dma/dma.o 00:01:52.021 CC lib/util/fd.o 00:01:52.021 CC lib/util/file.o 00:01:52.021 CC lib/util/hexlify.o 00:01:52.021 CXX lib/trace_parser/trace.o 00:01:52.021 CC lib/util/iov.o 00:01:52.021 CC lib/util/math.o 00:01:52.021 CC lib/util/pipe.o 00:01:52.021 CC lib/util/string.o 00:01:52.021 CC lib/util/strerror_tls.o 00:01:52.021 CC lib/util/uuid.o 00:01:52.021 CC lib/util/fd_group.o 00:01:52.021 CC lib/util/xor.o 00:01:52.021 CC lib/util/zipf.o 00:01:52.021 CC lib/vfio_user/host/vfio_user_pci.o 00:01:52.021 CC 
lib/vfio_user/host/vfio_user.o 00:01:52.021 LIB libspdk_dma.a 00:01:52.021 LIB libspdk_ioat.a 00:01:52.280 LIB libspdk_vfio_user.a 00:01:52.280 LIB libspdk_util.a 00:01:52.538 LIB libspdk_trace_parser.a 00:01:52.538 CC lib/json/json_parse.o 00:01:52.538 CC lib/json/json_util.o 00:01:52.538 CC lib/json/json_write.o 00:01:52.538 CC lib/conf/conf.o 00:01:52.538 CC lib/rdma/common.o 00:01:52.538 CC lib/vmd/vmd.o 00:01:52.538 CC lib/rdma/rdma_verbs.o 00:01:52.538 CC lib/vmd/led.o 00:01:52.538 CC lib/env_dpdk/env.o 00:01:52.538 CC lib/env_dpdk/memory.o 00:01:52.538 CC lib/env_dpdk/pci.o 00:01:52.538 CC lib/idxd/idxd.o 00:01:52.538 CC lib/env_dpdk/init.o 00:01:52.538 CC lib/env_dpdk/threads.o 00:01:52.538 CC lib/idxd/idxd_user.o 00:01:52.538 CC lib/idxd/idxd_kernel.o 00:01:52.538 CC lib/env_dpdk/pci_ioat.o 00:01:52.538 CC lib/env_dpdk/pci_virtio.o 00:01:52.538 CC lib/env_dpdk/pci_vmd.o 00:01:52.538 CC lib/env_dpdk/pci_idxd.o 00:01:52.538 CC lib/env_dpdk/pci_event.o 00:01:52.538 CC lib/env_dpdk/sigbus_handler.o 00:01:52.538 CC lib/env_dpdk/pci_dpdk.o 00:01:52.538 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:52.538 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:52.796 LIB libspdk_conf.a 00:01:52.796 LIB libspdk_json.a 00:01:52.796 LIB libspdk_rdma.a 00:01:53.056 LIB libspdk_idxd.a 00:01:53.056 LIB libspdk_vmd.a 00:01:53.056 CC lib/jsonrpc/jsonrpc_server.o 00:01:53.056 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:53.056 CC lib/jsonrpc/jsonrpc_client.o 00:01:53.056 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:53.314 LIB libspdk_jsonrpc.a 00:01:53.576 LIB libspdk_env_dpdk.a 00:01:53.576 CC lib/rpc/rpc.o 00:01:53.841 LIB libspdk_rpc.a 00:01:54.100 CC lib/notify/notify.o 00:01:54.100 CC lib/notify/notify_rpc.o 00:01:54.100 CC lib/trace/trace.o 00:01:54.100 CC lib/trace/trace_flags.o 00:01:54.100 CC lib/trace/trace_rpc.o 00:01:54.100 CC lib/sock/sock.o 00:01:54.100 CC lib/sock/sock_rpc.o 00:01:54.100 LIB libspdk_notify.a 00:01:54.100 LIB libspdk_trace.a 00:01:54.360 LIB libspdk_sock.a 00:01:54.619 CC lib/thread/thread.o 00:01:54.619 CC lib/thread/iobuf.o 00:01:54.619 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:54.619 CC lib/nvme/nvme_ctrlr.o 00:01:54.619 CC lib/nvme/nvme_fabric.o 00:01:54.619 CC lib/nvme/nvme_ns_cmd.o 00:01:54.619 CC lib/nvme/nvme_ns.o 00:01:54.619 CC lib/nvme/nvme_pcie_common.o 00:01:54.619 CC lib/nvme/nvme_pcie.o 00:01:54.619 CC lib/nvme/nvme_qpair.o 00:01:54.619 CC lib/nvme/nvme.o 00:01:54.619 CC lib/nvme/nvme_quirks.o 00:01:54.619 CC lib/nvme/nvme_transport.o 00:01:54.619 CC lib/nvme/nvme_discovery.o 00:01:54.619 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:54.619 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:54.619 CC lib/nvme/nvme_tcp.o 00:01:54.619 CC lib/nvme/nvme_opal.o 00:01:54.619 CC lib/nvme/nvme_io_msg.o 00:01:54.619 CC lib/nvme/nvme_poll_group.o 00:01:54.619 CC lib/nvme/nvme_zns.o 00:01:54.619 CC lib/nvme/nvme_cuse.o 00:01:54.619 CC lib/nvme/nvme_vfio_user.o 00:01:54.619 CC lib/nvme/nvme_rdma.o 00:01:55.555 LIB libspdk_thread.a 00:01:55.555 CC lib/blob/blobstore.o 00:01:55.555 CC lib/blob/request.o 00:01:55.555 CC lib/blob/zeroes.o 00:01:55.555 CC lib/blob/blob_bs_dev.o 00:01:55.556 CC lib/virtio/virtio_vhost_user.o 00:01:55.556 CC lib/virtio/virtio.o 00:01:55.556 CC lib/virtio/virtio_vfio_user.o 00:01:55.556 CC lib/accel/accel.o 00:01:55.556 CC lib/virtio/virtio_pci.o 00:01:55.556 CC lib/init/json_config.o 00:01:55.556 CC lib/init/subsystem_rpc.o 00:01:55.556 CC lib/init/subsystem.o 00:01:55.556 CC lib/accel/accel_rpc.o 00:01:55.556 CC lib/accel/accel_sw.o 00:01:55.556 CC lib/init/rpc.o 00:01:55.556 CC 
lib/vfu_tgt/tgt_endpoint.o 00:01:55.556 CC lib/vfu_tgt/tgt_rpc.o 00:01:55.816 LIB libspdk_init.a 00:01:55.816 LIB libspdk_virtio.a 00:01:55.816 LIB libspdk_vfu_tgt.a 00:01:56.075 LIB libspdk_nvme.a 00:01:56.075 CC lib/event/reactor.o 00:01:56.075 CC lib/event/app.o 00:01:56.075 CC lib/event/log_rpc.o 00:01:56.075 CC lib/event/app_rpc.o 00:01:56.075 CC lib/event/scheduler_static.o 00:01:56.335 LIB libspdk_accel.a 00:01:56.335 LIB libspdk_event.a 00:01:56.594 CC lib/bdev/bdev_rpc.o 00:01:56.594 CC lib/bdev/bdev.o 00:01:56.594 CC lib/bdev/bdev_zone.o 00:01:56.594 CC lib/bdev/part.o 00:01:56.594 CC lib/bdev/scsi_nvme.o 00:01:57.163 LIB libspdk_blob.a 00:01:57.423 CC lib/blobfs/blobfs.o 00:01:57.423 CC lib/blobfs/tree.o 00:01:57.423 CC lib/lvol/lvol.o 00:01:57.991 LIB libspdk_lvol.a 00:01:57.991 LIB libspdk_blobfs.a 00:01:58.251 LIB libspdk_bdev.a 00:01:58.511 CC lib/nvmf/ctrlr.o 00:01:58.511 CC lib/nvmf/ctrlr_discovery.o 00:01:58.511 CC lib/nvmf/ctrlr_bdev.o 00:01:58.511 CC lib/nvmf/subsystem.o 00:01:58.511 CC lib/nvmf/nvmf.o 00:01:58.511 CC lib/ftl/ftl_core.o 00:01:58.511 CC lib/ftl/ftl_init.o 00:01:58.511 CC lib/nvmf/nvmf_rpc.o 00:01:58.511 CC lib/ftl/ftl_layout.o 00:01:58.511 CC lib/nvmf/transport.o 00:01:58.511 CC lib/ftl/ftl_debug.o 00:01:58.511 CC lib/nvmf/tcp.o 00:01:58.511 CC lib/ftl/ftl_io.o 00:01:58.511 CC lib/ftl/ftl_sb.o 00:01:58.511 CC lib/nvmf/vfio_user.o 00:01:58.511 CC lib/ftl/ftl_l2p.o 00:01:58.511 CC lib/nvmf/rdma.o 00:01:58.511 CC lib/ftl/ftl_l2p_flat.o 00:01:58.511 CC lib/nbd/nbd.o 00:01:58.511 CC lib/ftl/ftl_band.o 00:01:58.511 CC lib/ftl/ftl_nv_cache.o 00:01:58.511 CC lib/nbd/nbd_rpc.o 00:01:58.511 CC lib/ftl/ftl_band_ops.o 00:01:58.511 CC lib/ftl/ftl_writer.o 00:01:58.511 CC lib/ftl/ftl_rq.o 00:01:58.511 CC lib/ftl/ftl_reloc.o 00:01:58.511 CC lib/ftl/ftl_l2p_cache.o 00:01:58.511 CC lib/ublk/ublk.o 00:01:58.511 CC lib/ftl/ftl_p2l.o 00:01:58.772 CC lib/ublk/ublk_rpc.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:58.772 CC lib/scsi/dev.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:58.772 CC lib/scsi/lun.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:58.772 CC lib/scsi/port.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:58.772 CC lib/scsi/scsi.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:58.772 CC lib/scsi/scsi_bdev.o 00:01:58.772 CC lib/scsi/scsi_pr.o 00:01:58.772 CC lib/scsi/scsi_rpc.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:58.772 CC lib/scsi/task.o 00:01:58.772 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:58.772 CC lib/ftl/utils/ftl_conf.o 00:01:58.772 CC lib/ftl/utils/ftl_md.o 00:01:58.772 CC lib/ftl/utils/ftl_mempool.o 00:01:58.772 CC lib/ftl/utils/ftl_bitmap.o 00:01:58.772 CC lib/ftl/utils/ftl_property.o 00:01:58.772 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:58.772 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:58.772 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:58.772 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:58.772 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:58.772 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:58.772 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:58.772 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:58.772 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:58.772 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:58.772 CC lib/ftl/base/ftl_base_dev.o 00:01:58.772 CC 
lib/ftl/base/ftl_base_bdev.o 00:01:58.772 CC lib/ftl/ftl_trace.o 00:01:59.031 LIB libspdk_nbd.a 00:01:59.031 LIB libspdk_scsi.a 00:01:59.290 LIB libspdk_ublk.a 00:01:59.290 LIB libspdk_ftl.a 00:01:59.290 CC lib/vhost/vhost.o 00:01:59.290 CC lib/iscsi/conn.o 00:01:59.290 CC lib/vhost/vhost_rpc.o 00:01:59.549 CC lib/vhost/vhost_scsi.o 00:01:59.549 CC lib/iscsi/init_grp.o 00:01:59.549 CC lib/iscsi/iscsi.o 00:01:59.549 CC lib/vhost/vhost_blk.o 00:01:59.549 CC lib/vhost/rte_vhost_user.o 00:01:59.549 CC lib/iscsi/md5.o 00:01:59.549 CC lib/iscsi/param.o 00:01:59.549 CC lib/iscsi/portal_grp.o 00:01:59.549 CC lib/iscsi/tgt_node.o 00:01:59.549 CC lib/iscsi/iscsi_subsystem.o 00:01:59.549 CC lib/iscsi/iscsi_rpc.o 00:01:59.549 CC lib/iscsi/task.o 00:01:59.808 LIB libspdk_nvmf.a 00:02:00.068 LIB libspdk_vhost.a 00:02:00.327 LIB libspdk_iscsi.a 00:02:00.587 CC module/env_dpdk/env_dpdk_rpc.o 00:02:00.587 CC module/vfu_device/vfu_virtio.o 00:02:00.587 CC module/vfu_device/vfu_virtio_blk.o 00:02:00.587 CC module/vfu_device/vfu_virtio_scsi.o 00:02:00.587 CC module/vfu_device/vfu_virtio_rpc.o 00:02:00.845 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:00.845 LIB libspdk_env_dpdk_rpc.a 00:02:00.845 CC module/accel/error/accel_error.o 00:02:00.845 CC module/accel/error/accel_error_rpc.o 00:02:00.845 CC module/sock/posix/posix.o 00:02:00.845 CC module/scheduler/gscheduler/gscheduler.o 00:02:00.845 CC module/accel/dsa/accel_dsa_rpc.o 00:02:00.845 CC module/accel/dsa/accel_dsa.o 00:02:00.845 CC module/accel/ioat/accel_ioat_rpc.o 00:02:00.845 CC module/accel/ioat/accel_ioat.o 00:02:00.845 CC module/accel/iaa/accel_iaa.o 00:02:00.845 CC module/accel/iaa/accel_iaa_rpc.o 00:02:00.845 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:00.845 CC module/blob/bdev/blob_bdev.o 00:02:00.845 LIB libspdk_scheduler_dpdk_governor.a 00:02:00.845 LIB libspdk_scheduler_gscheduler.a 00:02:00.845 LIB libspdk_accel_error.a 00:02:00.845 LIB libspdk_accel_ioat.a 00:02:00.845 LIB libspdk_scheduler_dynamic.a 00:02:00.846 LIB libspdk_accel_iaa.a 00:02:01.104 LIB libspdk_accel_dsa.a 00:02:01.104 LIB libspdk_blob_bdev.a 00:02:01.104 LIB libspdk_vfu_device.a 00:02:01.363 LIB libspdk_sock_posix.a 00:02:01.363 CC module/bdev/lvol/vbdev_lvol.o 00:02:01.363 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:01.363 CC module/bdev/malloc/bdev_malloc.o 00:02:01.363 CC module/bdev/error/vbdev_error.o 00:02:01.363 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:01.363 CC module/bdev/delay/vbdev_delay.o 00:02:01.363 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:01.363 CC module/bdev/error/vbdev_error_rpc.o 00:02:01.363 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:01.363 CC module/blobfs/bdev/blobfs_bdev.o 00:02:01.363 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:01.363 CC module/bdev/nvme/bdev_nvme.o 00:02:01.363 CC module/bdev/split/vbdev_split.o 00:02:01.363 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:01.363 CC module/bdev/passthru/vbdev_passthru.o 00:02:01.363 CC module/bdev/split/vbdev_split_rpc.o 00:02:01.363 CC module/bdev/nvme/vbdev_opal.o 00:02:01.363 CC module/bdev/nvme/bdev_mdns_client.o 00:02:01.363 CC module/bdev/raid/bdev_raid.o 00:02:01.363 CC module/bdev/nvme/nvme_rpc.o 00:02:01.363 CC module/bdev/ftl/bdev_ftl.o 00:02:01.363 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:01.363 CC module/bdev/raid/bdev_raid_rpc.o 00:02:01.363 CC module/bdev/raid/bdev_raid_sb.o 00:02:01.363 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:01.363 CC module/bdev/gpt/gpt.o 00:02:01.363 CC module/bdev/raid/raid0.o 00:02:01.363 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:01.363 CC module/bdev/raid/concat.o 00:02:01.363 CC module/bdev/gpt/vbdev_gpt.o 00:02:01.363 CC module/bdev/raid/raid1.o 00:02:01.363 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:01.363 CC module/bdev/null/bdev_null.o 00:02:01.363 CC module/bdev/null/bdev_null_rpc.o 00:02:01.363 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:01.363 CC module/bdev/aio/bdev_aio.o 00:02:01.363 CC module/bdev/aio/bdev_aio_rpc.o 00:02:01.363 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:01.363 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:01.363 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:01.363 CC module/bdev/iscsi/bdev_iscsi.o 00:02:01.363 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:01.621 LIB libspdk_bdev_split.a 00:02:01.621 LIB libspdk_bdev_error.a 00:02:01.621 LIB libspdk_bdev_gpt.a 00:02:01.621 LIB libspdk_bdev_ftl.a 00:02:01.621 LIB libspdk_blobfs_bdev.a 00:02:01.621 LIB libspdk_bdev_aio.a 00:02:01.621 LIB libspdk_bdev_zone_block.a 00:02:01.621 LIB libspdk_bdev_null.a 00:02:01.621 LIB libspdk_bdev_passthru.a 00:02:01.621 LIB libspdk_bdev_lvol.a 00:02:01.881 LIB libspdk_bdev_malloc.a 00:02:01.881 LIB libspdk_bdev_delay.a 00:02:01.881 LIB libspdk_bdev_iscsi.a 00:02:01.881 LIB libspdk_bdev_virtio.a 00:02:01.881 LIB libspdk_bdev_raid.a 00:02:02.821 LIB libspdk_bdev_nvme.a 00:02:03.081 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:03.081 CC module/event/subsystems/iobuf/iobuf.o 00:02:03.081 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:03.081 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:03.081 CC module/event/subsystems/vmd/vmd.o 00:02:03.081 CC module/event/subsystems/scheduler/scheduler.o 00:02:03.081 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:03.340 CC module/event/subsystems/sock/sock.o 00:02:03.340 LIB libspdk_event_vfu_tgt.a 00:02:03.340 LIB libspdk_event_iobuf.a 00:02:03.340 LIB libspdk_event_sock.a 00:02:03.340 LIB libspdk_event_vhost_blk.a 00:02:03.340 LIB libspdk_event_scheduler.a 00:02:03.340 LIB libspdk_event_vmd.a 00:02:03.599 CC module/event/subsystems/accel/accel.o 00:02:03.599 LIB libspdk_event_accel.a 00:02:04.169 CC module/event/subsystems/bdev/bdev.o 00:02:04.169 LIB libspdk_event_bdev.a 00:02:04.427 CC module/event/subsystems/scsi/scsi.o 00:02:04.427 CC module/event/subsystems/nbd/nbd.o 00:02:04.427 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:04.427 CC module/event/subsystems/ublk/ublk.o 00:02:04.427 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:04.686 LIB libspdk_event_scsi.a 00:02:04.686 LIB libspdk_event_ublk.a 00:02:04.686 LIB libspdk_event_nbd.a 00:02:04.686 LIB libspdk_event_nvmf.a 00:02:04.946 CC module/event/subsystems/iscsi/iscsi.o 00:02:04.946 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:04.946 LIB libspdk_event_vhost_scsi.a 00:02:04.946 LIB libspdk_event_iscsi.a 00:02:05.205 CC app/trace_record/trace_record.o 00:02:05.205 CC app/spdk_top/spdk_top.o 00:02:05.205 CC app/spdk_nvme_identify/identify.o 00:02:05.205 CXX app/trace/trace.o 00:02:05.205 CC app/spdk_nvme_perf/perf.o 00:02:05.205 CC test/rpc_client/rpc_client_test.o 00:02:05.205 CC app/spdk_nvme_discover/discovery_aer.o 00:02:05.469 CC app/spdk_lspci/spdk_lspci.o 00:02:05.469 TEST_HEADER include/spdk/accel.h 00:02:05.469 TEST_HEADER include/spdk/accel_module.h 00:02:05.469 TEST_HEADER include/spdk/assert.h 00:02:05.469 TEST_HEADER include/spdk/barrier.h 00:02:05.469 TEST_HEADER include/spdk/base64.h 00:02:05.469 TEST_HEADER include/spdk/bdev.h 00:02:05.469 TEST_HEADER include/spdk/bdev_module.h 00:02:05.469 TEST_HEADER 
include/spdk/bdev_zone.h 00:02:05.469 TEST_HEADER include/spdk/bit_array.h 00:02:05.469 TEST_HEADER include/spdk/blob_bdev.h 00:02:05.469 TEST_HEADER include/spdk/bit_pool.h 00:02:05.469 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:05.469 TEST_HEADER include/spdk/blobfs.h 00:02:05.469 TEST_HEADER include/spdk/blob.h 00:02:05.469 TEST_HEADER include/spdk/conf.h 00:02:05.469 TEST_HEADER include/spdk/config.h 00:02:05.469 TEST_HEADER include/spdk/cpuset.h 00:02:05.469 TEST_HEADER include/spdk/crc16.h 00:02:05.469 TEST_HEADER include/spdk/crc32.h 00:02:05.469 TEST_HEADER include/spdk/crc64.h 00:02:05.469 TEST_HEADER include/spdk/dif.h 00:02:05.469 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:05.469 TEST_HEADER include/spdk/dma.h 00:02:05.469 TEST_HEADER include/spdk/endian.h 00:02:05.469 TEST_HEADER include/spdk/env_dpdk.h 00:02:05.469 TEST_HEADER include/spdk/env.h 00:02:05.469 CC app/nvmf_tgt/nvmf_main.o 00:02:05.469 CC app/spdk_dd/spdk_dd.o 00:02:05.469 TEST_HEADER include/spdk/event.h 00:02:05.469 TEST_HEADER include/spdk/fd_group.h 00:02:05.469 CC app/iscsi_tgt/iscsi_tgt.o 00:02:05.469 TEST_HEADER include/spdk/fd.h 00:02:05.469 CC app/vhost/vhost.o 00:02:05.469 TEST_HEADER include/spdk/file.h 00:02:05.469 TEST_HEADER include/spdk/ftl.h 00:02:05.469 TEST_HEADER include/spdk/gpt_spec.h 00:02:05.469 TEST_HEADER include/spdk/hexlify.h 00:02:05.469 TEST_HEADER include/spdk/histogram_data.h 00:02:05.469 TEST_HEADER include/spdk/idxd.h 00:02:05.469 TEST_HEADER include/spdk/idxd_spec.h 00:02:05.469 TEST_HEADER include/spdk/init.h 00:02:05.469 TEST_HEADER include/spdk/ioat.h 00:02:05.469 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:05.469 CC test/env/vtophys/vtophys.o 00:02:05.469 CC test/env/pci/pci_ut.o 00:02:05.469 CC test/event/reactor/reactor.o 00:02:05.469 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:05.469 TEST_HEADER include/spdk/ioat_spec.h 00:02:05.469 CC examples/nvme/hello_world/hello_world.o 00:02:05.469 CC examples/ioat/verify/verify.o 00:02:05.469 CC examples/idxd/perf/perf.o 00:02:05.469 CC app/spdk_tgt/spdk_tgt.o 00:02:05.469 CC test/app/histogram_perf/histogram_perf.o 00:02:05.469 CC examples/nvme/arbitration/arbitration.o 00:02:05.469 CC examples/accel/perf/accel_perf.o 00:02:05.469 TEST_HEADER include/spdk/iscsi_spec.h 00:02:05.469 CC test/env/memory/memory_ut.o 00:02:05.469 CC examples/nvme/hotplug/hotplug.o 00:02:05.469 CC examples/nvme/abort/abort.o 00:02:05.469 CC test/event/event_perf/event_perf.o 00:02:05.470 TEST_HEADER include/spdk/json.h 00:02:05.470 CC examples/ioat/perf/perf.o 00:02:05.470 CC test/event/reactor_perf/reactor_perf.o 00:02:05.470 TEST_HEADER include/spdk/jsonrpc.h 00:02:05.470 CC test/app/stub/stub.o 00:02:05.470 CC examples/nvme/reconnect/reconnect.o 00:02:05.470 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:05.470 CC test/app/jsoncat/jsoncat.o 00:02:05.470 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:05.470 CC app/fio/nvme/fio_plugin.o 00:02:05.470 TEST_HEADER include/spdk/likely.h 00:02:05.470 CC examples/sock/hello_world/hello_sock.o 00:02:05.470 TEST_HEADER include/spdk/log.h 00:02:05.470 CC test/thread/lock/spdk_lock.o 00:02:05.470 CC examples/util/zipf/zipf.o 00:02:05.470 CC examples/vmd/led/led.o 00:02:05.470 TEST_HEADER include/spdk/lvol.h 00:02:05.470 CC test/nvme/startup/startup.o 00:02:05.470 CC test/thread/poller_perf/poller_perf.o 00:02:05.470 TEST_HEADER include/spdk/memory.h 00:02:05.470 TEST_HEADER include/spdk/mmio.h 00:02:05.470 CC test/nvme/aer/aer.o 00:02:05.470 TEST_HEADER include/spdk/nbd.h 
00:02:05.470 CC test/nvme/reset/reset.o 00:02:05.470 CC examples/vmd/lsvmd/lsvmd.o 00:02:05.470 CC test/nvme/boot_partition/boot_partition.o 00:02:05.470 TEST_HEADER include/spdk/notify.h 00:02:05.470 CC test/nvme/simple_copy/simple_copy.o 00:02:05.470 CC test/nvme/err_injection/err_injection.o 00:02:05.470 CC test/nvme/e2edp/nvme_dp.o 00:02:05.470 TEST_HEADER include/spdk/nvme.h 00:02:05.470 CC test/nvme/sgl/sgl.o 00:02:05.470 CC test/nvme/connect_stress/connect_stress.o 00:02:05.470 TEST_HEADER include/spdk/nvme_intel.h 00:02:05.470 CC test/nvme/overhead/overhead.o 00:02:05.470 CC test/event/app_repeat/app_repeat.o 00:02:05.470 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:05.470 CC test/nvme/reserve/reserve.o 00:02:05.470 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:05.470 TEST_HEADER include/spdk/nvme_spec.h 00:02:05.470 TEST_HEADER include/spdk/nvme_zns.h 00:02:05.470 CC app/fio/bdev/fio_plugin.o 00:02:05.470 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:05.470 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:05.470 CC examples/blob/hello_world/hello_blob.o 00:02:05.470 TEST_HEADER include/spdk/nvmf.h 00:02:05.470 CC examples/thread/thread/thread_ex.o 00:02:05.470 TEST_HEADER include/spdk/nvmf_spec.h 00:02:05.470 CC test/blobfs/mkfs/mkfs.o 00:02:05.470 CC test/accel/dif/dif.o 00:02:05.470 TEST_HEADER include/spdk/nvmf_transport.h 00:02:05.470 CC test/dma/test_dma/test_dma.o 00:02:05.470 CC test/app/bdev_svc/bdev_svc.o 00:02:05.470 CC examples/bdev/hello_world/hello_bdev.o 00:02:05.470 CC examples/blob/cli/blobcli.o 00:02:05.470 TEST_HEADER include/spdk/opal.h 00:02:05.470 TEST_HEADER include/spdk/opal_spec.h 00:02:05.470 LINK spdk_lspci 00:02:05.470 CC examples/nvmf/nvmf/nvmf.o 00:02:05.470 TEST_HEADER include/spdk/pci_ids.h 00:02:05.470 CC examples/bdev/bdevperf/bdevperf.o 00:02:05.470 CC test/event/scheduler/scheduler.o 00:02:05.470 TEST_HEADER include/spdk/pipe.h 00:02:05.470 TEST_HEADER include/spdk/queue.h 00:02:05.470 CC test/bdev/bdevio/bdevio.o 00:02:05.470 TEST_HEADER include/spdk/reduce.h 00:02:05.470 TEST_HEADER include/spdk/rpc.h 00:02:05.470 TEST_HEADER include/spdk/scheduler.h 00:02:05.470 CC test/env/mem_callbacks/mem_callbacks.o 00:02:05.470 TEST_HEADER include/spdk/scsi.h 00:02:05.470 TEST_HEADER include/spdk/scsi_spec.h 00:02:05.470 LINK rpc_client_test 00:02:05.470 TEST_HEADER include/spdk/sock.h 00:02:05.470 TEST_HEADER include/spdk/stdinc.h 00:02:05.470 CC test/lvol/esnap/esnap.o 00:02:05.470 TEST_HEADER include/spdk/string.h 00:02:05.470 TEST_HEADER include/spdk/thread.h 00:02:05.470 TEST_HEADER include/spdk/trace.h 00:02:05.470 TEST_HEADER include/spdk/trace_parser.h 00:02:05.470 TEST_HEADER include/spdk/tree.h 00:02:05.470 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:05.470 TEST_HEADER include/spdk/ublk.h 00:02:05.470 TEST_HEADER include/spdk/util.h 00:02:05.470 LINK spdk_nvme_discover 00:02:05.470 TEST_HEADER include/spdk/uuid.h 00:02:05.470 TEST_HEADER include/spdk/version.h 00:02:05.470 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:05.470 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:05.470 TEST_HEADER include/spdk/vhost.h 00:02:05.735 TEST_HEADER include/spdk/vmd.h 00:02:05.735 TEST_HEADER include/spdk/xor.h 00:02:05.735 TEST_HEADER include/spdk/zipf.h 00:02:05.735 CXX test/cpp_headers/accel.o 00:02:05.735 LINK interrupt_tgt 00:02:05.735 LINK spdk_trace_record 00:02:05.735 LINK reactor 00:02:05.735 LINK vtophys 00:02:05.735 LINK histogram_perf 00:02:05.735 LINK event_perf 00:02:05.735 LINK reactor_perf 00:02:05.735 LINK jsoncat 00:02:05.735 LINK lsvmd 
00:02:05.735 LINK led 00:02:05.735 LINK env_dpdk_post_init 00:02:05.735 LINK nvmf_tgt 00:02:05.735 LINK poller_perf 00:02:05.735 LINK zipf 00:02:05.735 LINK vhost 00:02:05.735 LINK iscsi_tgt 00:02:05.735 LINK pmr_persistence 00:02:05.735 LINK app_repeat 00:02:05.735 LINK stub 00:02:05.735 LINK startup 00:02:05.735 LINK boot_partition 00:02:05.735 LINK connect_stress 00:02:05.735 LINK cmb_copy 00:02:05.735 LINK err_injection 00:02:05.735 LINK hello_world 00:02:05.735 LINK verify 00:02:05.735 LINK ioat_perf 00:02:05.735 LINK reserve 00:02:05.735 LINK hotplug 00:02:05.735 LINK spdk_tgt 00:02:05.735 LINK hello_sock 00:02:05.735 LINK bdev_svc 00:02:05.735 LINK simple_copy 00:02:05.736 LINK mkfs 00:02:05.736 LINK reset 00:02:05.736 LINK aer 00:02:05.736 LINK hello_blob 00:02:05.736 LINK nvme_dp 00:02:05.736 LINK thread 00:02:05.736 CXX test/cpp_headers/accel_module.o 00:02:05.736 LINK overhead 00:02:05.736 LINK hello_bdev 00:02:05.736 LINK scheduler 00:02:05.736 LINK spdk_trace 00:02:05.736 LINK sgl 00:02:05.736 LINK idxd_perf 00:02:05.997 LINK reconnect 00:02:05.997 LINK nvmf 00:02:05.997 LINK abort 00:02:05.997 LINK arbitration 00:02:05.997 LINK accel_perf 00:02:05.997 LINK test_dma 00:02:05.997 LINK pci_ut 00:02:05.998 LINK nvme_manage 00:02:05.998 LINK dif 00:02:05.998 LINK spdk_dd 00:02:05.998 LINK bdevio 00:02:05.998 CXX test/cpp_headers/assert.o 00:02:06.261 LINK nvme_fuzz 00:02:06.261 LINK blobcli 00:02:06.261 LINK spdk_nvme 00:02:06.261 CXX test/cpp_headers/barrier.o 00:02:06.261 LINK mem_callbacks 00:02:06.261 LINK spdk_bdev 00:02:06.261 LINK spdk_nvme_identify 00:02:06.261 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:06.261 CXX test/cpp_headers/base64.o 00:02:06.261 CC test/nvme/compliance/nvme_compliance.o 00:02:06.261 CXX test/cpp_headers/bdev.o 00:02:06.525 LINK spdk_nvme_perf 00:02:06.525 CXX test/cpp_headers/bdev_module.o 00:02:06.525 CXX test/cpp_headers/bdev_zone.o 00:02:06.525 CXX test/cpp_headers/bit_array.o 00:02:06.525 CC test/nvme/fused_ordering/fused_ordering.o 00:02:06.525 CC test/nvme/fdp/fdp.o 00:02:06.525 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:06.525 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:06.526 CC test/nvme/cuse/cuse.o 00:02:06.526 LINK spdk_top 00:02:06.526 LINK bdevperf 00:02:06.526 CXX test/cpp_headers/bit_pool.o 00:02:06.526 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:06.526 LINK memory_ut 00:02:06.526 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:06.526 CXX test/cpp_headers/blob_bdev.o 00:02:06.526 CXX test/cpp_headers/blobfs_bdev.o 00:02:06.526 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:06.526 CXX test/cpp_headers/blobfs.o 00:02:06.786 CXX test/cpp_headers/blob.o 00:02:06.786 CXX test/cpp_headers/conf.o 00:02:06.786 CXX test/cpp_headers/config.o 00:02:06.786 CXX test/cpp_headers/cpuset.o 00:02:06.786 CXX test/cpp_headers/crc16.o 00:02:06.786 CXX test/cpp_headers/crc32.o 00:02:06.786 CXX test/cpp_headers/crc64.o 00:02:06.786 LINK doorbell_aers 00:02:06.786 CXX test/cpp_headers/dif.o 00:02:06.786 CXX test/cpp_headers/dma.o 00:02:06.786 CXX test/cpp_headers/endian.o 00:02:06.786 CXX test/cpp_headers/env_dpdk.o 00:02:06.786 LINK fused_ordering 00:02:06.786 CXX test/cpp_headers/env.o 00:02:06.786 CXX test/cpp_headers/event.o 00:02:06.786 CXX test/cpp_headers/fd_group.o 00:02:06.786 LINK fdp 00:02:06.786 CXX test/cpp_headers/fd.o 00:02:06.786 CXX test/cpp_headers/file.o 00:02:06.786 CXX test/cpp_headers/ftl.o 00:02:06.786 CXX test/cpp_headers/gpt_spec.o 00:02:06.786 CXX test/cpp_headers/hexlify.o 00:02:06.786 CXX 
test/cpp_headers/histogram_data.o 00:02:06.786 CXX test/cpp_headers/idxd.o 00:02:06.786 CXX test/cpp_headers/idxd_spec.o 00:02:06.786 LINK nvme_compliance 00:02:06.786 CXX test/cpp_headers/init.o 00:02:07.049 CXX test/cpp_headers/ioat.o 00:02:07.049 CXX test/cpp_headers/ioat_spec.o 00:02:07.049 CXX test/cpp_headers/iscsi_spec.o 00:02:07.049 CXX test/cpp_headers/json.o 00:02:07.049 CXX test/cpp_headers/jsonrpc.o 00:02:07.049 CXX test/cpp_headers/likely.o 00:02:07.049 CXX test/cpp_headers/log.o 00:02:07.049 CXX test/cpp_headers/lvol.o 00:02:07.049 CXX test/cpp_headers/memory.o 00:02:07.049 CXX test/cpp_headers/mmio.o 00:02:07.049 CXX test/cpp_headers/nbd.o 00:02:07.049 CXX test/cpp_headers/notify.o 00:02:07.049 CXX test/cpp_headers/nvme.o 00:02:07.049 CXX test/cpp_headers/nvme_intel.o 00:02:07.049 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.049 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.049 LINK llvm_vfio_fuzz 00:02:07.049 CXX test/cpp_headers/nvme_spec.o 00:02:07.049 CXX test/cpp_headers/nvme_zns.o 00:02:07.049 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.049 CXX test/cpp_headers/nvmf.o 00:02:07.049 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:07.049 CXX test/cpp_headers/nvmf_spec.o 00:02:07.049 CXX test/cpp_headers/nvmf_transport.o 00:02:07.049 CXX test/cpp_headers/opal.o 00:02:07.049 CXX test/cpp_headers/opal_spec.o 00:02:07.049 CXX test/cpp_headers/pci_ids.o 00:02:07.049 CXX test/cpp_headers/pipe.o 00:02:07.049 CXX test/cpp_headers/queue.o 00:02:07.049 CXX test/cpp_headers/reduce.o 00:02:07.049 CXX test/cpp_headers/rpc.o 00:02:07.049 CXX test/cpp_headers/scheduler.o 00:02:07.049 CXX test/cpp_headers/scsi.o 00:02:07.049 CXX test/cpp_headers/scsi_spec.o 00:02:07.049 CXX test/cpp_headers/sock.o 00:02:07.049 CXX test/cpp_headers/stdinc.o 00:02:07.049 CXX test/cpp_headers/string.o 00:02:07.049 CXX test/cpp_headers/thread.o 00:02:07.049 CXX test/cpp_headers/trace.o 00:02:07.049 CXX test/cpp_headers/trace_parser.o 00:02:07.049 CXX test/cpp_headers/tree.o 00:02:07.049 CXX test/cpp_headers/ublk.o 00:02:07.049 LINK vhost_fuzz 00:02:07.049 CXX test/cpp_headers/util.o 00:02:07.049 CXX test/cpp_headers/uuid.o 00:02:07.049 CXX test/cpp_headers/version.o 00:02:07.049 CXX test/cpp_headers/vfio_user_pci.o 00:02:07.049 CXX test/cpp_headers/vfio_user_spec.o 00:02:07.309 CXX test/cpp_headers/vhost.o 00:02:07.309 CXX test/cpp_headers/vmd.o 00:02:07.309 CXX test/cpp_headers/xor.o 00:02:07.309 CXX test/cpp_headers/zipf.o 00:02:07.309 LINK llvm_nvme_fuzz 00:02:07.309 LINK spdk_lock 00:02:07.570 LINK cuse 00:02:07.829 LINK iscsi_fuzz 00:02:09.739 LINK esnap 00:02:09.739 00:02:09.739 real 0m43.634s 00:02:09.739 user 6m19.914s 00:02:09.739 sys 2m31.652s 00:02:09.739 14:21:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:09.739 14:21:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.739 ************************************ 00:02:09.739 END TEST make 00:02:09.739 ************************************ 00:02:09.999 14:21:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:09.999 14:21:52 -- nvmf/common.sh@7 -- # uname -s 00:02:09.999 14:21:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:09.999 14:21:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:09.999 14:21:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:09.999 14:21:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:09.999 14:21:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:09.999 14:21:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:02:09.999 14:21:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:09.999 14:21:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:09.999 14:21:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:09.999 14:21:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:09.999 14:21:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:02:09.999 14:21:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:02:09.999 14:21:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:09.999 14:21:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:09.999 14:21:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:09.999 14:21:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:09.999 14:21:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:09.999 14:21:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.999 14:21:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.999 14:21:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.999 14:21:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.999 14:21:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.999 14:21:52 -- paths/export.sh@5 -- # export PATH 00:02:09.999 14:21:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.999 14:21:52 -- nvmf/common.sh@46 -- # : 0 00:02:09.999 14:21:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:09.999 14:21:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:09.999 14:21:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:09.999 14:21:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:09.999 14:21:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:09.999 14:21:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:09.999 14:21:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:09.999 14:21:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:09.999 14:21:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:09.999 14:21:52 -- spdk/autotest.sh@32 -- # uname -s 00:02:09.999 14:21:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:09.999 14:21:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:10.000 14:21:52 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:10.000 14:21:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:10.000 14:21:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:10.000 14:21:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:10.000 14:21:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:10.000 14:21:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:10.000 14:21:52 -- spdk/autotest.sh@48 -- # udevadm_pid=630843 00:02:10.000 14:21:52 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:10.000 14:21:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:10.000 14:21:52 -- spdk/autotest.sh@54 -- # echo 630845 00:02:10.000 14:21:52 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:10.000 14:21:52 -- spdk/autotest.sh@56 -- # echo 630846 00:02:10.000 14:21:52 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:10.000 14:21:52 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:10.000 14:21:52 -- spdk/autotest.sh@60 -- # echo 630847 00:02:10.000 14:21:52 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:02:10.000 14:21:52 -- spdk/autotest.sh@62 -- # echo 630848 00:02:10.000 14:21:52 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:10.000 14:21:52 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:02:10.000 14:21:52 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:10.000 14:21:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:10.000 14:21:52 -- common/autotest_common.sh@10 -- # set +x 00:02:10.000 14:21:52 -- spdk/autotest.sh@70 -- # create_test_list 00:02:10.000 14:21:52 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:10.000 14:21:52 -- common/autotest_common.sh@10 -- # set +x 00:02:10.000 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:10.000 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:10.000 14:21:52 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:10.000 14:21:52 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:10.000 14:21:52 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:10.000 14:21:52 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:10.000 14:21:52 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:10.000 14:21:52 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:10.000 14:21:52 -- common/autotest_common.sh@1440 -- # uname 00:02:10.000 14:21:52 -- common/autotest_common.sh@1440 -- # '[' Linux = 
FreeBSD ']' 00:02:10.000 14:21:52 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:10.000 14:21:52 -- common/autotest_common.sh@1460 -- # uname 00:02:10.000 14:21:52 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:10.000 14:21:52 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:10.000 14:21:52 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=clang 00:02:10.000 14:21:52 -- spdk/autotest.sh@83 -- # hash lcov 00:02:10.000 14:21:52 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:10.000 14:21:52 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:10.000 14:21:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:10.000 14:21:52 -- common/autotest_common.sh@10 -- # set +x 00:02:10.000 14:21:52 -- spdk/autotest.sh@102 -- # rm -f 00:02:10.000 14:21:52 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:14.193 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:02:14.193 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:14.193 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:16.095 14:21:58 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:16.095 14:21:58 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:16.095 14:21:58 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:16.095 14:21:58 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:16.095 14:21:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:16.095 14:21:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:16.095 14:21:58 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:16.095 14:21:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:16.095 14:21:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:16.095 14:21:58 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:16.095 14:21:58 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:16.095 14:21:58 -- spdk/autotest.sh@121 -- # grep -v p 00:02:16.095 14:21:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:16.095 14:21:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:16.095 14:21:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:16.095 14:21:58 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:16.095 14:21:58 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:16.354 No valid 
GPT data, bailing 00:02:16.354 14:21:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:16.354 14:21:58 -- scripts/common.sh@393 -- # pt= 00:02:16.355 14:21:58 -- scripts/common.sh@394 -- # return 1 00:02:16.355 14:21:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:16.355 1+0 records in 00:02:16.355 1+0 records out 00:02:16.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00789186 s, 133 MB/s 00:02:16.355 14:21:58 -- spdk/autotest.sh@129 -- # sync 00:02:16.355 14:21:58 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:16.355 14:21:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:16.355 14:21:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:21.632 14:22:03 -- spdk/autotest.sh@135 -- # uname -s 00:02:21.632 14:22:04 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:21.632 14:22:04 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:21.632 14:22:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:21.632 14:22:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:21.632 14:22:04 -- common/autotest_common.sh@10 -- # set +x 00:02:21.632 ************************************ 00:02:21.632 START TEST setup.sh 00:02:21.632 ************************************ 00:02:21.632 14:22:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:21.632 * Looking for test storage... 00:02:21.632 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:21.632 14:22:04 -- setup/test-setup.sh@10 -- # uname -s 00:02:21.632 14:22:04 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:21.632 14:22:04 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:21.632 14:22:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:21.632 14:22:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:21.632 14:22:04 -- common/autotest_common.sh@10 -- # set +x 00:02:21.632 ************************************ 00:02:21.632 START TEST acl 00:02:21.632 ************************************ 00:02:21.632 14:22:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:21.891 * Looking for test storage... 
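The pre-cleanup pass above decides whether /dev/nvme0n1 may be scrubbed: the namespace fails the zoned check, spdk-gpt.py bails with no valid GPT, blkid returns an empty PTTYPE, so block_in_use returns 1 and autotest zeroes the first MiB. A minimal sketch of that logic, reconstructed from the xtrace lines above rather than from the verbatim SPDK scripts ($rootdir and the exact short-circuiting are assumptions):

    # The sysfs "zoned" attribute reads "none" for ordinary namespaces;
    # the trace's "[[ none != none ]]" is this comparison.
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    # A device counts as in use if it carries a GPT or any partition table.
    block_in_use() {
        local block=$1 pt
        "$rootdir/scripts/spdk-gpt.py" "$block" && return 0  # valid GPT present
        pt=$(blkid -s PTTYPE -o value "$block") || true      # empty: no table
        [[ -n $pt ]]
    }

    # As in the run above: not zoned, not in use, so stale metadata is scrubbed.
    if ! is_block_zoned nvme0n1 && ! block_in_use /dev/nvme0n1; then
        dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
    fi

Zeroing only the first MiB clears the primary GPT header and typical filesystem superblocks without paying for a full-device wipe.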
00:02:21.891 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:21.891 14:22:04 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:21.891 14:22:04 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:21.891 14:22:04 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:21.891 14:22:04 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:21.891 14:22:04 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:21.891 14:22:04 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:21.891 14:22:04 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:21.891 14:22:04 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:21.891 14:22:04 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:21.891 14:22:04 -- setup/acl.sh@12 -- # devs=() 00:02:21.891 14:22:04 -- setup/acl.sh@12 -- # declare -a devs 00:02:21.891 14:22:04 -- setup/acl.sh@13 -- # drivers=() 00:02:21.891 14:22:04 -- setup/acl.sh@13 -- # declare -A drivers 00:02:21.891 14:22:04 -- setup/acl.sh@51 -- # setup reset 00:02:21.891 14:22:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:21.891 14:22:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:28.468 14:22:10 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:28.468 14:22:10 -- setup/acl.sh@16 -- # local dev driver 00:02:28.468 14:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:28.468 14:22:10 -- setup/acl.sh@15 -- # setup output status 00:02:28.468 14:22:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:28.468 14:22:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:31.760 Hugepages 00:02:31.760 node hugesize free / total 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 00:02:31.760 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:13 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.760 14:22:13 -- setup/acl.sh@20 -- # continue 00:02:31.760 14:22:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.760 14:22:14 -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:02:31.760 14:22:14 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:31.760 14:22:14 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:02:31.760 14:22:14 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:31.761 14:22:14 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:31.761 14:22:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.761 14:22:14 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # continue 00:02:31.761 14:22:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.761 14:22:14 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # continue 00:02:31.761 14:22:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.761 14:22:14 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # continue 00:02:31.761 14:22:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.761 14:22:14 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # continue 00:02:31.761 14:22:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.761 14:22:14 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # continue 00:02:31.761 14:22:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.761 14:22:14 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # continue 00:02:31.761 14:22:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.761 14:22:14 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:31.761 14:22:14 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # continue 00:02:31.761 14:22:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.761 14:22:14 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.761 14:22:14 -- setup/acl.sh@20 -- # continue 00:02:31.761 14:22:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.761 14:22:14 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:31.761 14:22:14 -- setup/acl.sh@54 -- # run_test denied denied 00:02:31.761 14:22:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:31.761 14:22:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:31.761 14:22:14 -- common/autotest_common.sh@10 -- # set +x 00:02:31.761 ************************************ 00:02:31.761 START TEST denied 00:02:31.761 ************************************ 00:02:31.761 14:22:14 -- common/autotest_common.sh@1104 -- # denied 00:02:31.761 14:22:14 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:02:31.761 14:22:14 -- setup/acl.sh@38 -- # setup output config 00:02:31.761 14:22:14 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:02:31.761 14:22:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:31.761 14:22:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:38.336 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:02:38.336 14:22:20 -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:02:38.336 14:22:20 -- setup/acl.sh@28 -- # local dev driver 00:02:38.336 14:22:20 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:38.336 14:22:20 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:02:38.336 14:22:20 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:02:38.336 14:22:20 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:38.336 14:22:20 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:38.336 14:22:20 -- setup/acl.sh@41 -- # setup reset 00:02:38.336 14:22:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.336 14:22:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.917 00:02:44.917 real 0m13.136s 00:02:44.917 user 0m4.172s 00:02:44.917 sys 0m8.168s 00:02:44.917 14:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.917 14:22:27 -- common/autotest_common.sh@10 -- # set +x 00:02:44.917 ************************************ 00:02:44.917 END TEST denied 00:02:44.917 ************************************ 00:02:44.917 14:22:27 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:44.917 14:22:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:44.917 14:22:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:44.917 14:22:27 -- common/autotest_common.sh@10 -- # set +x 00:02:44.917 ************************************ 00:02:44.917 START TEST allowed 00:02:44.917 ************************************ 00:02:44.917 14:22:27 -- common/autotest_common.sh@1104 -- # allowed 00:02:44.917 14:22:27 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:02:44.917 14:22:27 -- setup/acl.sh@45 -- # setup output config 00:02:44.917 14:22:27 -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:02:44.917 14:22:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.917 14:22:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 
config 00:02:54.905 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:02:54.905 14:22:36 -- setup/acl.sh@47 -- # verify 00:02:54.905 14:22:36 -- setup/acl.sh@28 -- # local dev driver 00:02:54.905 14:22:36 -- setup/acl.sh@48 -- # setup reset 00:02:54.905 14:22:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.905 14:22:36 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.485 00:03:01.485 real 0m15.396s 00:03:01.485 user 0m4.286s 00:03:01.485 sys 0m7.928s 00:03:01.485 14:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.485 14:22:42 -- common/autotest_common.sh@10 -- # set +x 00:03:01.485 ************************************ 00:03:01.485 END TEST allowed 00:03:01.485 ************************************ 00:03:01.485 00:03:01.485 real 0m38.653s 00:03:01.485 user 0m11.956s 00:03:01.485 sys 0m22.965s 00:03:01.485 14:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.485 14:22:42 -- common/autotest_common.sh@10 -- # set +x 00:03:01.485 ************************************ 00:03:01.485 END TEST acl 00:03:01.485 ************************************ 00:03:01.485 14:22:42 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:01.485 14:22:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:01.485 14:22:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:01.485 14:22:42 -- common/autotest_common.sh@10 -- # set +x 00:03:01.485 ************************************ 00:03:01.485 START TEST hugepages 00:03:01.485 ************************************ 00:03:01.485 14:22:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:01.485 * Looking for test storage... 
00:03:01.485 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:01.485 14:22:42 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:01.485 14:22:42 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:01.485 14:22:42 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:01.485 14:22:42 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:01.485 14:22:42 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:01.485 14:22:42 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:01.485 14:22:42 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:01.485 14:22:42 -- setup/common.sh@18 -- # local node= 00:03:01.485 14:22:42 -- setup/common.sh@19 -- # local var val 00:03:01.485 14:22:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.485 14:22:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.485 14:22:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.485 14:22:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.485 14:22:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.485 14:22:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 73628996 kB' 'MemAvailable: 77234376 kB' 'Buffers: 9752 kB' 'Cached: 11302440 kB' 'SwapCached: 0 kB' 'Active: 8096188 kB' 'Inactive: 3770068 kB' 'Active(anon): 7701780 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557948 kB' 'Mapped: 159100 kB' 'Shmem: 7147716 kB' 'KReclaimable: 188540 kB' 'Slab: 679508 kB' 'SReclaimable: 188540 kB' 'SUnreclaim: 490968 kB' 'KernelStack: 17424 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434160 kB' 'Committed_AS: 8921076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212116 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.485 14:22:42 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.485 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.485 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # continue 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.486 14:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.486 14:22:42 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.486 14:22:42 -- setup/common.sh@33 -- # echo 2048 00:03:01.486 14:22:42 -- setup/common.sh@33 -- # return 0 00:03:01.486 14:22:42 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:01.486 14:22:42 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:01.486 14:22:42 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:01.486 14:22:42 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:01.486 14:22:42 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:01.486 14:22:42 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
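The get_meminfo call traced above resolves Hugepagesize to 2048 kB by scanning /proc/meminfo key by key (or a per-node meminfo file, with the "Node N" prefix stripped) until the requested field matches. A minimal standalone sketch of that lookup, assuming the name get_meminfo_sketch is illustrative and not the actual setup/common.sh function:

    get_meminfo_sketch() {
        # Print the numeric value of one meminfo key, e.g. "Hugepagesize" -> 2048.
        # Pass a NUMA node number as $2 to read that node's meminfo instead.
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
        local line
        while read -r line; do
            line=${line#Node [0-9] }       # per-node files prefix each key with "Node N"
            line=${line#Node [0-9][0-9] }
            if [[ $line == "$key":* ]]; then
                set -- ${line#"$key":}     # word-split drops the leading blanks, keeps the number
                echo "$1"
                return 0
            fi
        done < "$file"
        return 1
    }

Called as get_meminfo_sketch Hugepagesize, this prints 2048 on the host traced above, the value that becomes default_hugepages=2048.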
00:03:01.486 14:22:42 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:01.486 14:22:42 -- setup/hugepages.sh@207 -- # get_nodes 00:03:01.486 14:22:42 -- setup/hugepages.sh@27 -- # local node 00:03:01.486 14:22:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.486 14:22:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:01.486 14:22:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.486 14:22:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:01.486 14:22:42 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.486 14:22:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.486 14:22:42 -- setup/hugepages.sh@208 -- # clear_hp 00:03:01.486 14:22:42 -- setup/hugepages.sh@37 -- # local node hp 00:03:01.486 14:22:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.486 14:22:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.486 14:22:42 -- setup/hugepages.sh@41 -- # echo 0 00:03:01.486 14:22:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.486 14:22:42 -- setup/hugepages.sh@41 -- # echo 0 00:03:01.486 14:22:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.486 14:22:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.486 14:22:42 -- setup/hugepages.sh@41 -- # echo 0 00:03:01.486 14:22:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.486 14:22:42 -- setup/hugepages.sh@41 -- # echo 0 00:03:01.486 14:22:42 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:01.486 14:22:42 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:01.486 14:22:42 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:01.486 14:22:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:01.486 14:22:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:01.486 14:22:42 -- common/autotest_common.sh@10 -- # set +x 00:03:01.486 ************************************ 00:03:01.486 START TEST default_setup 00:03:01.486 ************************************ 00:03:01.486 14:22:43 -- common/autotest_common.sh@1104 -- # default_setup 00:03:01.486 14:22:43 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:01.486 14:22:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:01.486 14:22:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:01.486 14:22:43 -- setup/hugepages.sh@51 -- # shift 00:03:01.486 14:22:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:01.486 14:22:43 -- setup/hugepages.sh@52 -- # local node_ids 00:03:01.486 14:22:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.486 14:22:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:01.486 14:22:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:01.486 14:22:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:01.486 14:22:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.486 14:22:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:01.486 14:22:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.486 14:22:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.486 14:22:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.486 14:22:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:01.486 14:22:43 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:01.486 14:22:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:01.486 14:22:43 -- setup/hugepages.sh@73 -- # return 0 00:03:01.486 14:22:43 -- setup/hugepages.sh@137 -- # setup output 00:03:01.486 14:22:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.486 14:22:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:04.780 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:04.780 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:08.070 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:03:09.985 14:22:52 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:09.985 14:22:52 -- setup/hugepages.sh@89 -- # local node 00:03:09.985 14:22:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.985 14:22:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.985 14:22:52 -- setup/hugepages.sh@92 -- # local surp 00:03:09.985 14:22:52 -- setup/hugepages.sh@93 -- # local resv 00:03:09.985 14:22:52 -- setup/hugepages.sh@94 -- # local anon 00:03:09.985 14:22:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.985 14:22:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.985 14:22:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.985 14:22:52 -- setup/common.sh@18 -- # local node= 00:03:09.985 14:22:52 -- setup/common.sh@19 -- # local var val 00:03:09.985 14:22:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.985 14:22:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.985 14:22:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.985 14:22:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.985 14:22:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.985 14:22:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.985 14:22:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75772124 kB' 'MemAvailable: 79377224 kB' 'Buffers: 9752 kB' 'Cached: 11302604 kB' 'SwapCached: 0 kB' 'Active: 8111508 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717100 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572576 kB' 'Mapped: 159012 kB' 'Shmem: 7147880 kB' 'KReclaimable: 187980 kB' 'Slab: 679624 kB' 'SReclaimable: 187980 kB' 'SUnreclaim: 491644 kB' 'KernelStack: 17456 kB' 'PageTables: 7824 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8934664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212100 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.985 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.985 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 
-- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.986 14:22:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.986 14:22:52 -- setup/common.sh@33 -- # echo 0 00:03:09.986 14:22:52 -- setup/common.sh@33 -- # return 0 00:03:09.986 14:22:52 -- setup/hugepages.sh@97 -- # anon=0 00:03:09.986 14:22:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.986 14:22:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.986 14:22:52 -- setup/common.sh@18 -- # local node= 00:03:09.986 14:22:52 -- setup/common.sh@19 -- # local var val 00:03:09.986 14:22:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.986 14:22:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.986 14:22:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.986 14:22:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.986 14:22:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.986 14:22:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.986 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75771876 kB' 'MemAvailable: 79376960 kB' 'Buffers: 9752 kB' 'Cached: 11302604 kB' 'SwapCached: 0 kB' 'Active: 8111444 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717036 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572592 kB' 'Mapped: 159012 kB' 'Shmem: 7147880 kB' 'KReclaimable: 187948 kB' 'Slab: 679572 kB' 'SReclaimable: 187948 kB' 'SUnreclaim: 491624 kB' 'KernelStack: 17504 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8934676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212100 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 
'DirectMap1G: 91226112 kB' 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 
00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 
-- setup/common.sh@31 -- # read -r var val _ 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.987 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.987 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
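A side note on the \H\u\g\e\P\a\g\e\s\_\S\u\r\p spelling: xtrace prints the right-hand side of [[ $var == ... ]] with every character escaped because an unquoted pattern there is matched as a glob, while a quoted (or fully escaped) one is compared literally. A two-line illustration of the distinction:

    var=HugePages_Rsvd
    [[ $var == HugePages_* ]] && echo "glob pattern: matches"
    [[ $var == "HugePages_Surp" ]] || echo "literal string: no match"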
00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.988 14:22:52 -- setup/common.sh@33 -- # echo 0 00:03:09.988 14:22:52 -- setup/common.sh@33 -- # return 0 00:03:09.988 14:22:52 -- setup/hugepages.sh@99 -- # surp=0 00:03:09.988 14:22:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:09.988 14:22:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.988 14:22:52 -- setup/common.sh@18 -- # local node= 00:03:09.988 14:22:52 -- setup/common.sh@19 -- # local var val 00:03:09.988 14:22:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.988 14:22:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.988 14:22:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.988 14:22:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.988 14:22:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.988 14:22:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75772380 kB' 'MemAvailable: 79377464 kB' 'Buffers: 9752 kB' 'Cached: 11302604 kB' 'SwapCached: 0 kB' 'Active: 8111484 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717076 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572628 kB' 'Mapped: 159012 kB' 'Shmem: 7147880 kB' 'KReclaimable: 187948 kB' 'Slab: 679572 kB' 'SReclaimable: 187948 kB' 'SUnreclaim: 491624 kB' 'KernelStack: 17520 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8934692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212100 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.988 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.988 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 
00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.989 14:22:52 -- setup/common.sh@32 -- # continue 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.989 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.020 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.020 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.020 14:22:52 -- setup/common.sh@33 -- # echo 0 00:03:10.020 14:22:52 -- setup/common.sh@33 -- # return 0 00:03:10.020 14:22:52 -- setup/hugepages.sh@100 -- # resv=0 00:03:10.020 14:22:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.020 nr_hugepages=1024 00:03:10.020 14:22:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.020 resv_hugepages=0 00:03:10.020 14:22:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.020 surplus_hugepages=0 00:03:10.020 14:22:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.020 anon_hugepages=0 00:03:10.020 14:22:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.020 14:22:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.020 14:22:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.020 14:22:52 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:10.020 14:22:52 -- setup/common.sh@18 -- # local node= 00:03:10.020 14:22:52 -- setup/common.sh@19 -- # local var val 00:03:10.020 14:22:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.020 14:22:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.020 14:22:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.020 14:22:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.020 14:22:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.020 14:22:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.021 14:22:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75773160 kB' 'MemAvailable: 79378244 kB' 'Buffers: 9752 kB' 'Cached: 11302644 kB' 'SwapCached: 0 kB' 'Active: 8111152 kB' 'Inactive: 3770068 kB' 'Active(anon): 7716744 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572192 kB' 'Mapped: 159012 kB' 'Shmem: 7147920 kB' 'KReclaimable: 187948 kB' 'Slab: 679572 kB' 'SReclaimable: 187948 kB' 'SUnreclaim: 491624 kB' 'KernelStack: 17488 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8934704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212100 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 
14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.021 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.021 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 
00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 
14:22:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.022 14:22:52 -- setup/common.sh@33 -- # echo 1024 00:03:10.022 14:22:52 -- setup/common.sh@33 -- # return 0 00:03:10.022 14:22:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.022 14:22:52 -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.022 14:22:52 -- setup/hugepages.sh@27 -- # local node 00:03:10.022 14:22:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.022 14:22:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:10.022 14:22:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.022 14:22:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:10.022 14:22:52 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.022 14:22:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.022 14:22:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.022 14:22:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.022 14:22:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.022 14:22:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.022 14:22:52 -- setup/common.sh@18 -- # local node=0 00:03:10.022 14:22:52 -- setup/common.sh@19 -- # local var val 00:03:10.022 14:22:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.022 14:22:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.022 14:22:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.022 14:22:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.022 14:22:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.022 14:22:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 41834284 kB' 'MemUsed: 6230580 kB' 'SwapCached: 0 kB' 'Active: 2489616 kB' 'Inactive: 119896 kB' 'Active(anon): 2242848 kB' 
'Inactive(anon): 0 kB' 'Active(file): 246768 kB' 'Inactive(file): 119896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2388136 kB' 'Mapped: 67096 kB' 'AnonPages: 224552 kB' 'Shmem: 2021472 kB' 'KernelStack: 10984 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102012 kB' 'Slab: 356792 kB' 'SReclaimable: 102012 kB' 'SUnreclaim: 254780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.022 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.022 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 
00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.023 14:22:52 -- setup/common.sh@32 -- # continue
00:03:10.023 14:22:52 -- setup/common.sh@31 -- # IFS=': '
00:03:10.023 14:22:52 -- setup/common.sh@31 -- # read -r var val _
00:03:10.023 14:22:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.023 14:22:52 -- setup/common.sh@33 -- # echo 0
00:03:10.023 14:22:52 -- setup/common.sh@33 -- # return 0
00:03:10.023 14:22:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.023 14:22:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.023 14:22:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.023 14:22:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.023 14:22:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:10.023 node0=1024 expecting 1024
00:03:10.023 14:22:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:10.023
00:03:10.023 real 0m9.274s
00:03:10.023 user 0m2.229s
00:03:10.023 sys 0m3.961s
00:03:10.023 14:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:10.023 14:22:52 -- common/autotest_common.sh@10 -- # set +x
00:03:10.023 ************************************
00:03:10.023 END TEST default_setup
00:03:10.023 ************************************
00:03:10.023 14:22:52 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:10.023 14:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:10.023 14:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:10.023 14:22:52 -- common/autotest_common.sh@10 -- # set +x
00:03:10.023 ************************************
00:03:10.023 START TEST per_node_1G_alloc
00:03:10.023 ************************************
00:03:10.023 14:22:52 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:03:10.023 14:22:52 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:10.023 14:22:52 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:10.023 14:22:52 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:10.023 14:22:52 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:10.023 14:22:52 -- setup/hugepages.sh@51 -- # shift
00:03:10.023 14:22:52 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:10.023 14:22:52 -- setup/hugepages.sh@52 -- # local node_ids
00:03:10.023 14:22:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:10.023 14:22:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:10.023 14:22:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:10.023 14:22:52 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:10.023 14:22:52 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:10.023 14:22:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:10.023 14:22:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:10.023 14:22:52 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:10.023 14:22:52 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:10.023 14:22:52 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:10.023 14:22:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:10.023 14:22:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:10.023 14:22:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:10.023 14:22:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:10.023 14:22:52 -- setup/hugepages.sh@73 -- # return 0
00:03:10.023 14:22:52 -- setup/hugepages.sh@146 -- # NRHUGE=512
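NRHUGE and HUGENODE are consumed by scripts/setup.sh on the next line, which requests the 512 default-size (2048 kB) pages on each of nodes 0 and 1, matching the nodes_test values set above. Outside the harness, the same per-node allocation can be made directly through the kernel's standard hugetlb sysfs interface — a sketch; the sysfs paths are the generic kernel interface, not taken from this log:

    # request 512 x 2 MiB hugepages on each of node 0 and node 1 (run as root)
    for node in 0 1; do
        echo 512 > /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    done
    # confirm what the kernel actually granted per node
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages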
14:22:52 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:10.023 14:22:52 -- setup/hugepages.sh@146 -- # setup output
00:03:10.023 14:22:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.023 14:22:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:14.223 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:14.223 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.223 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.604 14:22:58 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:15.604 14:22:58 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:15.604 14:22:58 -- setup/hugepages.sh@89 -- # local node
00:03:15.604 14:22:58 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.604 14:22:58 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.604 14:22:58 -- setup/hugepages.sh@92 -- # local surp
00:03:15.604 14:22:58 -- setup/hugepages.sh@93 -- # local resv
00:03:15.604 14:22:58 -- setup/hugepages.sh@94 -- # local anon
00:03:15.604 14:22:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.604 14:22:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.604 14:22:58 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.604 14:22:58 -- setup/common.sh@18 -- # local node=
00:03:15.604 14:22:58 -- setup/common.sh@19 -- # local var val
00:03:15.604 14:22:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.604 14:22:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.604 14:22:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.604 14:22:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.604 14:22:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.604 14:22:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.604 14:22:58 -- setup/common.sh@31 -- # IFS=': '
00:03:15.604 14:22:58 -- setup/common.sh@31 -- # read -r var val _
00:03:15.605 14:22:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75790560 kB' 'MemAvailable: 79395660 kB' 'Buffers: 9752 kB' 'Cached: 11302752 kB' 'SwapCached: 0 kB' 'Active: 8112028 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717620 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572924 kB' 'Mapped: 158208
00:03:15.605 14:22:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75790560 kB' 'MemAvailable: 79395660 kB' 'Buffers: 9752 kB' 'Cached: 11302752 kB' 'SwapCached: 0 kB' 'Active: 8112028 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717620 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572924 kB' 'Mapped: 158208 kB' 'Shmem: 7148028 kB' 'KReclaimable: 187980 kB' 'Slab: 680036 kB' 'SReclaimable: 187980 kB' 'SUnreclaim: 492056 kB' 'KernelStack: 17456 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8927924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212164 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:15.605 [xtrace trimmed: setup/common.sh@31-32 walks the snapshot one key at a time (MemTotal, MemFree, ... HardwareCorrupted), hitting 'continue' on every key that is not AnonHugePages]
00:03:15.869 14:22:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.869 14:22:58 -- setup/common.sh@33 -- # echo 0
00:03:15.869 14:22:58 -- setup/common.sh@33 -- # return 0
00:03:15.869 14:22:58 -- setup/hugepages.sh@97 -- # anon=0
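This probe and the three that follow all run through the same helper: get_meminfo in setup/common.sh snapshots a meminfo file and scans it line by line until the requested key matches, which is what produces the long IFS=': ' / read -r var val _ / continue runs in this trace. A condensed reconstruction of that pattern from the xtrace (a sketch, not the verbatim SPDK source):

    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # with a node argument, read the per-node copy from sysfs instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }") # sysfs lines carry a 'Node N ' prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # each 'continue' above is one skipped key
            echo "$val" # e.g. 0 for AnonHugePages, 1024 for HugePages_Total
            return 0
        done
        return 1
    }

Called as get_meminfo AnonHugePages for the machine-wide value, or as get_meminfo HugePages_Surp 0 for node 0 only, as happens further down.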
00:03:15.869 14:22:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.869 14:22:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.869 14:22:58 -- setup/common.sh@18 -- # local node=
00:03:15.869 14:22:58 -- setup/common.sh@19 -- # local var val
00:03:15.869 14:22:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.869 14:22:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.869 14:22:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.869 14:22:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.869 14:22:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.869 14:22:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.869 14:22:58 -- setup/common.sh@31 -- # IFS=': '
00:03:15.869 14:22:58 -- setup/common.sh@31 -- # read -r var val _
00:03:15.870 14:22:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75792888 kB' 'MemAvailable: 79397988 kB' 'Buffers: 9752 kB' 'Cached: 11302756 kB' 'SwapCached: 0 kB' 'Active: 8112316 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717908 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573212 kB' 'Mapped: 158204 kB' 'Shmem: 7148032 kB' 'KReclaimable: 187980 kB' 'Slab: 680004 kB' 'SReclaimable: 187980 kB' 'SUnreclaim: 492024 kB' 'KernelStack: 17488 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8928868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212196 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:15.870 [xtrace trimmed: same per-key walk as above, 'continue' on every key until HugePages_Surp]
00:03:15.871 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.871 14:22:58 -- setup/common.sh@33 -- # echo 0
00:03:15.871 14:22:58 -- setup/common.sh@33 -- # return 0
00:03:15.871 14:22:58 -- setup/hugepages.sh@99 -- # surp=0
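For reference, the counters that verify_nr_hugepages extracts one probe at a time can be read in a single command; on a host in the state shown by the snapshots above, the output would look like:

    $ grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp))' /proc/meminfo
    AnonHugePages:         0 kB
    HugePages_Total:    1024
    HugePages_Free:     1024
    HugePages_Rsvd:        0
    HugePages_Surp:        0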
00:03:15.871 14:22:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:15.871 14:22:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:15.871 14:22:58 -- setup/common.sh@18 -- # local node=
00:03:15.871 14:22:58 -- setup/common.sh@19 -- # local var val
00:03:15.871 14:22:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.871 14:22:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.871 14:22:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.871 14:22:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.871 14:22:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.871 14:22:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.871 14:22:58 -- setup/common.sh@31 -- # IFS=': '
00:03:15.871 14:22:58 -- setup/common.sh@31 -- # read -r var val _
00:03:15.871 14:22:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75791572 kB' 'MemAvailable: 79396672 kB' 'Buffers: 9752 kB' 'Cached: 11302768 kB' 'SwapCached: 0 kB' 'Active: 8112220 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717812 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573052 kB' 'Mapped: 158204 kB' 'Shmem: 7148044 kB' 'KReclaimable: 187980 kB' 'Slab: 680048 kB' 'SReclaimable: 187980 kB' 'SUnreclaim: 492068 kB' 'KernelStack: 17584 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8928884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212244 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:15.872 [xtrace trimmed: per-key walk, 'continue' on every key until HugePages_Rsvd]
00:03:15.872 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.872 14:22:58 -- setup/common.sh@33 -- # echo 0
00:03:15.872 14:22:58 -- setup/common.sh@33 -- # return 0
00:03:15.872 14:22:58 -- setup/hugepages.sh@100 -- # resv=0
00:03:15.872 14:22:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:15.872 nr_hugepages=1024
00:03:15.872 14:22:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:15.872 resv_hugepages=0
00:03:15.872 14:22:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:15.872 surplus_hugepages=0
00:03:15.872 14:22:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:15.872 anon_hugepages=0
00:03:15.872 14:22:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.872 14:22:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
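The two arithmetic checks at hugepages.sh@107-109 are the core assertion of this pass: the pool the kernel reports must equal exactly what the test configured, with nothing surplus, reserved, or coming from transparent hugepages. Spelled out with the sketched get_meminfo from above (a paraphrase of the trace, not the script itself):

    nr_hugepages=1024                     # requested: NRHUGE=512 on each of 2 nodes
    anon=$(get_meminfo AnonHugePages)     # 0    THP must not inflate the count
    surp=$(get_meminfo HugePages_Surp)    # 0    no overcommit surplus
    resv=$(get_meminfo HugePages_Rsvd)    # 0    nothing reserved but unfaulted
    total=$(get_meminfo HugePages_Total)  # 1024
    (( total == nr_hugepages + surp + resv ))  # pool matches the request...
    (( total == nr_hugepages ))                # ...and none of it is borrowed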
00:03:15.872 14:22:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.872 14:22:58 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.872 14:22:58 -- setup/common.sh@18 -- # local node=
00:03:15.872 14:22:58 -- setup/common.sh@19 -- # local var val
00:03:15.872 14:22:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.872 14:22:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.872 14:22:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.872 14:22:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.872 14:22:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.872 14:22:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.872 14:22:58 -- setup/common.sh@31 -- # IFS=': '
00:03:15.872 14:22:58 -- setup/common.sh@31 -- # read -r var val _
00:03:15.873 14:22:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75792492 kB' 'MemAvailable: 79397592 kB' 'Buffers: 9752 kB' 'Cached: 11302772 kB' 'SwapCached: 0 kB' 'Active: 8112252 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717844 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573108 kB' 'Mapped: 158204 kB' 'Shmem: 7148048 kB' 'KReclaimable: 187980 kB' 'Slab: 680020 kB' 'SReclaimable: 187980 kB' 'SUnreclaim: 492040 kB' 'KernelStack: 17520 kB' 'PageTables: 7452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8927636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212244 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:15.873 [xtrace trimmed: per-key walk, 'continue' on every key until HugePages_Total]
00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.874 14:22:58 -- setup/common.sh@33 -- # echo 1024
00:03:15.874 14:22:58 -- setup/common.sh@33 -- # return 0
00:03:15.874 14:22:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
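From here on the same probe is repeated per NUMA node: get_nodes seeds nodes_sys with the 512 pages expected on each node, and get_meminfo is called with a node argument so mem_f switches from /proc/meminfo to the sysfs copy. A quick way to see what those per-node reads return on a two-socket box like this one (standard sysfs layout; values match the node0 snapshot that follows):

    for node in /sys/devices/system/node/node[0-9]*; do
        # every line carries a 'Node <id> ' prefix, hence the strip in get_meminfo
        grep HugePages_ "$node/meminfo"
    done
    # Node 0 HugePages_Total:   512
    # Node 0 HugePages_Free:    512
    # Node 0 HugePages_Surp:      0
    # (and likewise for node1)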
# IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.874 14:22:58 -- setup/common.sh@33 -- # echo 1024 00:03:15.874 14:22:58 -- setup/common.sh@33 -- # return 0 00:03:15.874 14:22:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.874 14:22:58 -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.874 14:22:58 -- setup/hugepages.sh@27 -- # local node 00:03:15.874 14:22:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.874 14:22:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.874 14:22:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.874 14:22:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.874 14:22:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.874 14:22:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.874 14:22:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.874 14:22:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.874 14:22:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.874 14:22:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.874 14:22:58 -- setup/common.sh@18 -- # local node=0 00:03:15.874 14:22:58 -- setup/common.sh@19 -- # local var val 00:03:15.874 14:22:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.874 14:22:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.874 14:22:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.874 14:22:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.874 14:22:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.874 14:22:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 48064864 kB' 'MemFree: 42892072 kB' 'MemUsed: 5172792 kB' 'SwapCached: 0 kB' 'Active: 2489128 kB' 'Inactive: 119896 kB' 'Active(anon): 2242360 kB' 'Inactive(anon): 0 kB' 'Active(file): 246768 kB' 'Inactive(file): 119896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2388200 kB' 'Mapped: 66880 kB' 'AnonPages: 223960 kB' 'Shmem: 2021536 kB' 'KernelStack: 10904 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102044 kB' 'Slab: 357308 kB' 'SReclaimable: 102044 kB' 'SUnreclaim: 255264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 
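The wall of '[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' pairs around this point is setup/common.sh's get_meminfo helper linearly scanning the node-0 snapshot it just printed: it mapfile-loads /sys/devices/system/node/node0/meminfo, strips the 'Node <n> ' line prefix with an extglob expansion, then splits each 'key: value' line on IFS=': ' until the requested key matches. A simplified sketch reconstructed from the @17-@33 trace tags above (not the verbatim SPDK helper):

  #!/usr/bin/env bash
  shopt -s extglob                              # for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      # Per-node counters live in sysfs; fall back to the global file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node 0 " prefix
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Surp 0                  # prints 0 in this run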
00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.874 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.874 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- 
setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 
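Each key in the snapshot costs one test-and-continue iteration, which is why a single get_meminfo lookup expands to dozens of trace lines here. For the value actually being extracted, an equivalent one-liner would be (illustration only, not what the script runs; the per-node sysfs file keeps the 'Node 0' prefix, hence $3/$4):

  awk '$3 == "HugePages_Surp:" {print $4}' /sys/devices/system/node/node0/meminfo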
00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@33 -- # echo 0 00:03:15.875 14:22:58 -- setup/common.sh@33 -- # return 0 00:03:15.875 14:22:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.875 14:22:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.875 14:22:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.875 14:22:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:15.875 14:22:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.875 14:22:58 -- setup/common.sh@18 -- # local node=1 00:03:15.875 14:22:58 -- setup/common.sh@19 -- # local var val 00:03:15.875 14:22:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.875 14:22:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.875 14:22:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:15.875 14:22:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:15.875 14:22:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.875 14:22:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220556 kB' 'MemFree: 32895560 kB' 'MemUsed: 11324996 kB' 'SwapCached: 0 kB' 'Active: 5626008 kB' 'Inactive: 3650172 kB' 'Active(anon): 5478368 kB' 'Inactive(anon): 0 kB' 'Active(file): 147640 kB' 'Inactive(file): 3650172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8924348 kB' 'Mapped: 91760 kB' 'AnonPages: 352072 kB' 'Shmem: 5126536 kB' 'KernelStack: 6584 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85936 kB' 'Slab: 322704 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 236768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 
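Node 0 reported HugePages_Surp: 0, so nodes_test[0] stays at its 512 reserved pages (resv is 0 in this run), and the identical scan now repeats against node 1, whose snapshot above likewise shows HugePages_Total: 512 and HugePages_Free: 512. The per-node accounting traced at hugepages.sh@115-@117 boils down to (reconstructed sketch, names taken from the trace):

  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                    # reserved: 0 here
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # surplus: 0 here
  done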
00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.875 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.875 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- 
setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # continue 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 14:22:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 14:22:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.876 14:22:58 -- setup/common.sh@33 -- # echo 0 00:03:15.876 14:22:58 -- setup/common.sh@33 -- # return 0 00:03:15.876 14:22:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.876 14:22:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.876 14:22:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.876 14:22:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.876 14:22:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:15.876 node0=512 expecting 512 00:03:15.876 14:22:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.876 14:22:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.876 14:22:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.876 14:22:58 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:15.876 node1=512 expecting 512 00:03:15.876 14:22:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:15.876 00:03:15.876 real 0m5.956s 00:03:15.876 user 0m2.221s 00:03:15.876 sys 0m3.794s 00:03:15.876 14:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.876 14:22:58 -- common/autotest_common.sh@10 -- # set +x 00:03:15.876 ************************************ 00:03:15.876 END TEST per_node_1G_alloc 00:03:15.876 ************************************ 00:03:15.876 14:22:58 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:15.876 
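per_node_1G_alloc ends cleanly: both NUMA nodes hold exactly the 512 hugepages the test expected ('node0=512 expecting 512', 'node1=512 expecting 512'), in about 6 s of wall time. run_test then launches even_2G_alloc, which asks for 2097152 kB (2 GiB) of hugepage memory split evenly across the two nodes; the get_test_nr_hugepages trace below encodes this arithmetic (sketch assumes the 2048 kB default hugepage size shown in the meminfo dumps that follow):

  size=2097152                              # requested kB = 2 GiB
  hugepagesize=2048                         # kB per page, per Hugepagesize below
  nr_hugepages=$(( size / hugepagesize ))   # = 1024
  # HUGE_EVEN_ALLOC=yes -> nodes_test[0]=512, nodes_test[1]=512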
14:22:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:15.876 14:22:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:15.876 14:22:58 -- common/autotest_common.sh@10 -- # set +x 00:03:15.876 ************************************ 00:03:15.876 START TEST even_2G_alloc 00:03:15.876 ************************************ 00:03:15.876 14:22:58 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:15.876 14:22:58 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:15.876 14:22:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:15.876 14:22:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.876 14:22:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.876 14:22:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:15.876 14:22:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.876 14:22:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.876 14:22:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.876 14:22:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.876 14:22:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.876 14:22:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.876 14:22:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.876 14:22:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.876 14:22:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:15.876 14:22:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.876 14:22:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:15.876 14:22:58 -- setup/hugepages.sh@83 -- # : 512 00:03:15.876 14:22:58 -- setup/hugepages.sh@84 -- # : 1 00:03:15.876 14:22:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.876 14:22:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:15.876 14:22:58 -- setup/hugepages.sh@83 -- # : 0 00:03:15.876 14:22:58 -- setup/hugepages.sh@84 -- # : 0 00:03:15.876 14:22:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.876 14:22:58 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:15.876 14:22:58 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:15.876 14:22:58 -- setup/hugepages.sh@153 -- # setup output 00:03:15.876 14:22:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.877 14:22:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:20.075 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:20.075 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:20.075 0000:80:04.1 (8086 2021): 
Already using the vfio-pci driver 00:03:20.075 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.988 14:23:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:21.988 14:23:04 -- setup/hugepages.sh@89 -- # local node 00:03:21.988 14:23:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.988 14:23:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.988 14:23:04 -- setup/hugepages.sh@92 -- # local surp 00:03:21.988 14:23:04 -- setup/hugepages.sh@93 -- # local resv 00:03:21.988 14:23:04 -- setup/hugepages.sh@94 -- # local anon 00:03:21.988 14:23:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.988 14:23:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.988 14:23:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.988 14:23:04 -- setup/common.sh@18 -- # local node= 00:03:21.988 14:23:04 -- setup/common.sh@19 -- # local var val 00:03:21.988 14:23:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.988 14:23:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.988 14:23:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.988 14:23:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.988 14:23:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.988 14:23:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.988 14:23:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75807704 kB' 'MemAvailable: 79412776 kB' 'Buffers: 9752 kB' 'Cached: 11302924 kB' 'SwapCached: 0 kB' 'Active: 8112324 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717916 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573064 kB' 'Mapped: 158276 kB' 'Shmem: 7148200 kB' 'KReclaimable: 187924 kB' 'Slab: 679392 kB' 'SReclaimable: 187924 kB' 'SUnreclaim: 491468 kB' 'KernelStack: 17408 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8925988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212244 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # continue 
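verify_nr_hugepages now checks the allocation against the global /proc/meminfo. Before counting AnonHugePages it gates on transparent hugepages not being disabled — the '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test above matches the bracketed selection in the kernel's THP mode string — and the snapshot it printed already shows the even_2G_alloc request satisfied: HugePages_Total: 1024, HugePages_Free: 1024, Hugetlb: 2097152 kB. A sketch of that gate (the sysfs path is an assumption; the trace only shows the resulting string being tested):

  thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp_mode != *'[never]'* ]]; then
      anon=$(get_meminfo AnonHugePages)                      # 0 kB in this run
  fi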
00:03:21.988 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.988 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.988 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 
14:23:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.989 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.989 14:23:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.990 14:23:04 -- setup/common.sh@33 -- # echo 0 00:03:21.990 14:23:04 -- setup/common.sh@33 -- # 
return 0 00:03:21.990 14:23:04 -- setup/hugepages.sh@97 -- # anon=0 00:03:21.990 14:23:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.990 14:23:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.990 14:23:04 -- setup/common.sh@18 -- # local node= 00:03:21.990 14:23:04 -- setup/common.sh@19 -- # local var val 00:03:21.990 14:23:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.990 14:23:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.990 14:23:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.990 14:23:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.990 14:23:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.990 14:23:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75806844 kB' 'MemAvailable: 79411916 kB' 'Buffers: 9752 kB' 'Cached: 11302928 kB' 'SwapCached: 0 kB' 'Active: 8111820 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717412 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572592 kB' 'Mapped: 158272 kB' 'Shmem: 7148204 kB' 'KReclaimable: 187924 kB' 'Slab: 679408 kB' 'SReclaimable: 187924 kB' 'SUnreclaim: 491484 kB' 'KernelStack: 17440 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8926000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212196 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.990 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.990 14:23:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.990 
14:23:04 -- setup/common.sh@32 -- # continue
00:03:21.990 [xtrace condensed: the setup/common.sh@31-32 read/skip loop repeats for each remaining /proc/meminfo key (Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd), none matching HugePages_Surp]
00:03:21.991 14:23:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.991 14:23:04 -- setup/common.sh@33 -- # echo 0
00:03:21.991 14:23:04 -- setup/common.sh@33 -- # return 0
00:03:21.991 14:23:04 -- setup/hugepages.sh@99 -- # surp=0
00:03:21.991 14:23:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.991 14:23:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.991 14:23:04 -- setup/common.sh@18 -- # local node=
00:03:21.991 14:23:04 -- setup/common.sh@19 -- # local var val
00:03:21.991 14:23:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.991 14:23:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.991 14:23:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.991 14:23:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.992 14:23:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.992 14:23:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.992 14:23:04 -- setup/common.sh@31 -- # IFS=': '
00:03:21.992 14:23:04 -- setup/common.sh@31 -- # read -r var val _
00:03:21.992 14:23:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75806848 kB' 'MemAvailable: 79411920 kB' 'Buffers: 9752 kB' 'Cached: 11302940 kB' 'SwapCached: 0 kB' 'Active: 8111844 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717436 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572592 kB' 'Mapped: 158272 kB' 'Shmem: 7148216 kB' 'KReclaimable: 187924 kB' 'Slab: 679408 kB' 'SReclaimable: 187924 kB' 'SUnreclaim: 491484 kB' 'KernelStack: 17440 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8926012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212196 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
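What the trace is single-stepping here is a plain key scan: get_meminfo snapshots the chosen meminfo file into an array, strips any per-node "Node N " prefix, then reads key/value pairs until the requested key matches, as the loop resuming below shows. A minimal runnable sketch of that scan, reconstructed from the trace rather than copied from setup/common.sh:

#!/usr/bin/env bash
# Reconstruction of the get_meminfo scan seen in the xtrace above;
# an approximation of SPDK's setup/common.sh, not the verbatim script.
shopt -s extglob  # needed for the +([0-9]) prefix-strip pattern

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo mem
    # With a node index, read the per-node sysfs copy instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # skip until the key matches
        echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Rsvd    # prints 0 on the machine in this log
get_meminfo HugePages_Surp 0  # per-node query, as used further down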
00:03:21.992 [xtrace condensed: the setup/common.sh@31-32 read/skip loop walks the /proc/meminfo keys from MemTotal through HugePages_Free, none matching HugePages_Rsvd]
00:03:21.993 14:23:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.993 14:23:04 -- setup/common.sh@33 -- # echo 0
00:03:21.993 14:23:04 -- setup/common.sh@33 -- # return 0
00:03:21.993 14:23:04 -- setup/hugepages.sh@100 -- # resv=0
00:03:21.993 14:23:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:21.993 nr_hugepages=1024
00:03:21.993 14:23:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:21.993 resv_hugepages=0
00:03:21.993 14:23:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:21.993 surplus_hugepages=0
00:03:21.993 14:23:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:21.993 anon_hugepages=0
00:03:21.993 14:23:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.993 14:23:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:21.993 14:23:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.993 14:23:04 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.993 14:23:04 -- setup/common.sh@18 -- # local node=
00:03:21.993 14:23:04 -- setup/common.sh@19 -- # local var val
00:03:21.993 14:23:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.993 14:23:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.993 14:23:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.993 14:23:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.993 14:23:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.993 14:23:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.994 14:23:04 -- setup/common.sh@31 -- # IFS=': '
00:03:21.994 14:23:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75806092 kB' 'MemAvailable: 79411164 kB' 'Buffers: 9752 kB' 'Cached: 11302956 kB' 'SwapCached: 0 kB' 'Active: 8112216 kB' 'Inactive: 3770068 kB' 'Active(anon): 7717808 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572980 kB' 'Mapped: 158776 kB' 'Shmem: 7148232 kB' 'KReclaimable: 187924 kB' 'Slab: 679408 kB' 'SReclaimable: 187924 kB' 'SUnreclaim: 491484 kB' 'KernelStack: 17424 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8927384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212212 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:21.994 14:23:04 -- setup/common.sh@31 -- # read -r var val _
00:03:21.994 [xtrace condensed: the read/skip loop walks the /proc/meminfo keys again, skipping everything up to HugePages_Total]
00:03:21.995 14:23:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.995 14:23:04 -- setup/common.sh@33 -- # echo 1024
00:03:21.995 14:23:04 -- setup/common.sh@33 -- # return 0
00:03:21.995 14:23:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.995 14:23:04 -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.995 14:23:04 -- setup/hugepages.sh@27 -- # local node
00:03:21.995 14:23:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.995 14:23:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:21.995 14:23:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.995 14:23:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:21.995 14:23:04 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.995 14:23:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:21.995 14:23:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.995 14:23:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.995 14:23:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:21.996 14:23:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.996 14:23:04 -- setup/common.sh@18 -- # local node=0
00:03:21.996 14:23:04 -- setup/common.sh@19 -- # local var val
00:03:21.996 14:23:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.996 14:23:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.996 14:23:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:21.996 14:23:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:21.996 14:23:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.996 14:23:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.996 14:23:04 -- setup/common.sh@31 -- # IFS=': '
00:03:21.996 14:23:04 -- setup/common.sh@31 -- # read -r var val _
00:03:21.996 14:23:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 42897500 kB' 'MemUsed: 5167364 kB' 'SwapCached: 0 kB' 'Active: 2488864 kB' 'Inactive: 119896 kB' 'Active(anon): 2242096 kB' 'Inactive(anon): 0 kB' 'Active(file): 246768 kB' 'Inactive(file): 119896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2388232 kB' 'Mapped: 66952 kB' 'AnonPages: 223684 kB' 'Shmem: 2021568 kB' 'KernelStack: 10952 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101980 kB' 'Slab: 356808 kB' 'SReclaimable: 101980 kB' 'SUnreclaim: 254828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
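From hugepages.sh@115-117 the test walks every populated NUMA node, folds any reserved pages into the expected count, and adds each node's own HugePages_Surp before the node0/node1 scans traced below. A rough sketch of that accounting loop; the nodes_test/nodes_sys semantics are inferred from the trace (an assumption, not the verbatim script), and it reuses the get_meminfo sketch above:

# Inferred shape of the hugepages.sh@115-128 per-node accounting.
declare -a nodes_sys=(512 512)    # per-node totals read from sysfs by get_nodes
declare -a nodes_test=(512 512)   # what the test expects: 1024 split over 2 nodes
resv=0                            # globally reserved pages, computed above

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                   # @116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"  # @128
done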
00:03:21.996 [xtrace condensed: the read/skip loop walks the node0 meminfo keys from MemTotal through HugePages_Free, none matching HugePages_Surp]
00:03:21.997 14:23:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.997 14:23:04 -- setup/common.sh@33 -- # echo 0
00:03:21.997 14:23:04 -- setup/common.sh@33 -- # return 0
00:03:21.997 14:23:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.997 14:23:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.997 14:23:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.997 14:23:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:21.997 14:23:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.997 14:23:04 -- setup/common.sh@18 -- # local node=1
00:03:21.997 14:23:04 -- setup/common.sh@19 -- # local var val
00:03:21.997 14:23:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.997 14:23:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.997 14:23:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:21.997 14:23:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:21.997 14:23:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.997 14:23:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': '
00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _
00:03:21.998 14:23:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220556 kB' 'MemFree: 32904848 kB' 'MemUsed: 11315708 kB' 'SwapCached: 0 kB' 'Active: 5622988 kB' 'Inactive: 3650172 kB' 'Active(anon): 5475348 kB' 'Inactive(anon): 0 kB' 'Active(file): 147640 kB' 'Inactive(file): 3650172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8924504 kB' 'Mapped: 91320 kB' 'AnonPages: 348904 kB' 'Shmem: 5126692 kB' 'KernelStack: 6488 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85944 kB' 'Slab: 322600 kB' 'SReclaimable: 85944 kB' 'SUnreclaim: 236656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
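The two per-node snapshots each report HugePages_Total: 512, and those shares have to add up to the global HugePages_Total of 1024 seen in /proc/meminfo earlier. A standalone cross-check along the same lines (an illustration, not part of the test scripts):

# Sum HugePages_Total across all NUMA nodes and compare with the
# global figure; on the machine in this log both sides are 1024.
total=0
for f in /sys/devices/system/node/node*/meminfo; do
    # Per-node lines look like "Node 1 HugePages_Total:   512".
    (( total += $(awk '$3 == "HugePages_Total:" {print $4}' "$f") ))
done
echo "per-node sum: $total"
grep HugePages_Total: /proc/meminfo   # HugePages_Total:    1024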
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.997 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.997 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.997 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.997 14:23:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.997 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.997 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.997 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.997 14:23:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.997 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- 
setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # continue 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.998 14:23:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.998 14:23:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.998 14:23:04 -- setup/common.sh@33 -- # echo 0 00:03:21.998 14:23:04 -- setup/common.sh@33 -- # return 0 00:03:21.998 14:23:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.998 14:23:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.998 14:23:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.998 14:23:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.998 14:23:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:21.998 node0=512 expecting 512 00:03:21.998 14:23:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.998 14:23:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.998 14:23:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.998 14:23:04 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:21.998 node1=512 expecting 512 00:03:21.998 14:23:04 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:21.998 00:03:21.998 real 0m5.993s 00:03:21.998 user 0m2.244s 00:03:21.998 sys 0m3.814s 00:03:21.998 14:23:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.998 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:03:21.998 ************************************ 00:03:21.998 END TEST even_2G_alloc 00:03:21.998 ************************************ 00:03:21.998 14:23:04 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:21.998 14:23:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:21.998 14:23:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:21.998 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:03:21.998 ************************************ 00:03:21.998 START TEST odd_alloc 00:03:21.998 ************************************ 00:03:21.998 14:23:04 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:21.998 14:23:04 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:21.998 14:23:04 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:21.998 14:23:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.998 14:23:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.998 14:23:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:21.998 14:23:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.998 14:23:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.998 14:23:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.998 14:23:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:21.998 14:23:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.998 14:23:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.998 14:23:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.999 14:23:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.999 14:23:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.999 14:23:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.999 14:23:04 -- setup/hugepages.sh@82 -- # 
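The even_2G_alloc epilogue traced above (hugepages.sh@126-@130, ending in "[[ 512 == \5\1\2 ]]") compares, per node, the page count the test requested against what the kernel actually exposes. A condensed sketch of that comparison pattern, assuming bash 4+ associative arrays; variable names follow setup/hugepages.sh but the real script's loop details may differ:

    declare -A sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1    # distinct expected per-node counts
      sorted_s[${nodes_sys[node]}]=1     # distinct observed per-node counts
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]   # pass iff the two sets agree

With both nodes at 512 the key sets collapse to "512" on each side, which is exactly the "[[ 512 == \5\1\2 ]]" seen in the trace.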
nodes_test[_no_nodes - 1]=512 00:03:21.999 14:23:04 -- setup/hugepages.sh@83 -- # : 513 00:03:21.999 14:23:04 -- setup/hugepages.sh@84 -- # : 1 00:03:21.999 14:23:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.999 14:23:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:21.999 14:23:04 -- setup/hugepages.sh@83 -- # : 0 00:03:21.999 14:23:04 -- setup/hugepages.sh@84 -- # : 0 00:03:21.999 14:23:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.999 14:23:04 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:21.999 14:23:04 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:21.999 14:23:04 -- setup/hugepages.sh@160 -- # setup output 00:03:21.999 14:23:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.999 14:23:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:26.201 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.201 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.201 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:28.114 14:23:10 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:28.114 14:23:10 -- setup/hugepages.sh@89 -- # local node 00:03:28.114 14:23:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.114 14:23:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.114 14:23:10 -- setup/hugepages.sh@92 -- # local surp 00:03:28.114 14:23:10 -- setup/hugepages.sh@93 -- # local resv 00:03:28.114 14:23:10 -- setup/hugepages.sh@94 -- # local anon 00:03:28.114 14:23:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.114 14:23:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.114 14:23:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.114 14:23:10 -- setup/common.sh@18 -- # local node= 00:03:28.114 14:23:10 -- setup/common.sh@19 -- # local var val 00:03:28.114 14:23:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.114 14:23:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.114 14:23:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.114 14:23:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.114 14:23:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.114 14:23:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.114 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.114 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- 
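The ": 513" / ": 1" no-ops traced at hugepages.sh@83/@84 above are arithmetic side effects of the per-node divider: each pass gives the highest remaining node the floor of remaining_pages / remaining_nodes, so the odd total 1025 lands as 512 on node1 and 513 on node0. A reconstruction consistent with the traced values (a sketch, not the verbatim script):

    _nr_hugepages=1025 _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
      nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
      : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traces as ": 513", then ": 0"
      : $(( _no_nodes -= 1 ))                               # traces as ": 1",   then ": 0"
    done
    echo "${nodes_test[@]}"   # -> 513 512

HUGEMEM=2049 feeds the same odd request in from the environment: 2049 MB at a 2048 kB page size rounds up to the 1025 pages (Hugetlb: 2099200 kB) visible in the meminfo snapshots below.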
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75809944 kB' 'MemAvailable: 79415016 kB' 'Buffers: 9752 kB' 'Cached: 11303084 kB' 'SwapCached: 0 kB' 'Active: 8116036 kB' 'Inactive: 3770068 kB' 'Active(anon): 7721628 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576764 kB' 'Mapped: 159368 kB' 'Shmem: 7148360 kB' 'KReclaimable: 187924 kB' 'Slab: 678932 kB' 'SReclaimable: 187924 kB' 'SUnreclaim: 491008 kB' 'KernelStack: 17504 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481712 kB' 'Committed_AS: 8961044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212180 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- 
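What the trace around this snapshot is exercising: get_meminfo replays the chosen meminfo source line by line and scans it for one key. A condensed sketch consistent with the traced calls (the printf at common.sh@16, the IFS=': ' read loop at @31/@32, echo-and-return at @33); assumes bash with extglob for the Node-prefix strip:

    get_meminfo() {
      local get=$1 node=${2:-} var val
      local -a mem
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      shopt -s extglob
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}" && return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
    }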
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 
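A note on the \A\n\o\n\H\u\g\e\P\a\g\e\s spelling that fills this trace: it is not corruption. When xtrace replays [[ $var == "$get" ]], bash backslash-escapes every character of the quoted right-hand side so the printed command would still be a literal (non-glob) match if re-run. A two-line demo:

    set -x
    get=HugePages_Surp var=Bounce
    [[ $var == "$get" ]]
    # xtrace prints: [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]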
14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.115 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.115 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.116 14:23:10 -- setup/common.sh@33 -- # echo 0 00:03:28.116 14:23:10 -- setup/common.sh@33 -- # return 0 00:03:28.116 14:23:10 -- setup/hugepages.sh@97 -- # anon=0 00:03:28.116 14:23:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.116 14:23:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.116 14:23:10 -- setup/common.sh@18 -- # local node= 00:03:28.116 14:23:10 -- setup/common.sh@19 -- # local var val 00:03:28.116 14:23:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.116 14:23:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.116 14:23:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.116 14:23:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.116 14:23:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.116 14:23:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75809848 kB' 'MemAvailable: 79414920 kB' 'Buffers: 9752 kB' 'Cached: 11303084 kB' 'SwapCached: 0 kB' 'Active: 8115348 kB' 'Inactive: 3770068 kB' 'Active(anon): 7720940 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576120 kB' 'Mapped: 159364 kB' 'Shmem: 7148360 kB' 'KReclaimable: 187924 kB' 'Slab: 679028 kB' 'SReclaimable: 187924 kB' 'SUnreclaim: 
491104 kB' 'KernelStack: 17536 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481712 kB' 'Committed_AS: 8961056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212148 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.116 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.116 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 
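For reference on what this HugePages_Surp pass is asserting: surplus pages only exist when the kernel overcommits beyond nr_hugepages (via nr_overcommit_hugepages), which these tests never request, so 0 is the expected answer. The underlying counters are standard kernel sysfs ABI and can be read directly (2 MB pool shown):

    grep -H . /sys/kernel/mm/hugepages/hugepages-2048kB/{nr,free,resv,surplus,nr_overcommit}_hugepages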
00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.117 14:23:10 -- setup/common.sh@33 -- # echo 0 00:03:28.117 14:23:10 -- setup/common.sh@33 -- # return 0 00:03:28.117 14:23:10 -- setup/hugepages.sh@99 -- # surp=0 00:03:28.117 14:23:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.117 14:23:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.117 14:23:10 -- setup/common.sh@18 -- # local node= 00:03:28.117 14:23:10 -- setup/common.sh@19 -- # local var val 00:03:28.117 14:23:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.117 14:23:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.117 14:23:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.117 14:23:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.117 14:23:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.117 14:23:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75809852 kB' 'MemAvailable: 79414924 kB' 'Buffers: 9752 kB' 'Cached: 11303108 kB' 'SwapCached: 0 kB' 'Active: 8115288 kB' 'Inactive: 3770068 kB' 'Active(anon): 7720880 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575996 kB' 'Mapped: 159364 kB' 'Shmem: 7148384 kB' 'KReclaimable: 187924 kB' 'Slab: 679028 kB' 'SReclaimable: 187924 kB' 'SUnreclaim: 491104 kB' 'KernelStack: 17536 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481712 kB' 'Committed_AS: 8961068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212164 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 
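The empty test at common.sh@23 above, "[[ -e /sys/devices/system/node/node/meminfo ]]" with no node id spliced in, shows these calls running in whole-system mode against /proc/meminfo. With a node argument the same helper would read the per-node file instead, which is why the "Node N " prefix strip exists. Hypothetical usage against the sketch given earlier:

    get_meminfo HugePages_Total      # whole system; prints 1025 here
    get_meminfo HugePages_Total 0    # node0 only, via /sys/devices/system/node/node0/meminfo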
00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.117 14:23:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.117 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.117 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 
14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 
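HugePages_Rsvd, which this pass reads out as 0, counts pages a process has mmap()ed from the pool but not yet faulted in; nothing in this test maps hugetlb memory, so any nonzero value would indicate a leaked reservation from an earlier run. A minimal way to watch the counter:

    grep -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo
    # Free stays high while Rsvd rises as soon as a process mmap()s hugetlb
    # memory it has not touched yet; both settle back once it exits.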
00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # continue 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 14:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 14:23:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.118 14:23:10 -- setup/common.sh@33 -- # echo 0 00:03:28.118 14:23:10 -- setup/common.sh@33 -- # return 0 00:03:28.118 14:23:10 -- setup/hugepages.sh@100 -- # resv=0 00:03:28.118 14:23:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:28.118 nr_hugepages=1025 00:03:28.118 14:23:10 -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0
00:03:28.118 resv_hugepages=0
00:03:28.118 14:23:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:28.118 surplus_hugepages=0
00:03:28.118 14:23:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:28.118 anon_hugepages=0
00:03:28.118 14:23:10 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:28.118 14:23:10 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:28.118 14:23:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:28.118 14:23:10 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:28.118 14:23:10 -- setup/common.sh@18 -- # local node=
00:03:28.118 14:23:10 -- setup/common.sh@19 -- # local var val
00:03:28.118 14:23:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.118 14:23:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.118 14:23:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.118 14:23:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.118 14:23:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.118 14:23:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.118 14:23:10 -- setup/common.sh@31 -- # IFS=': '
00:03:28.119 14:23:10 -- setup/common.sh@31 -- # read -r var val _
00:03:28.119 14:23:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75810356 kB' 'MemAvailable: 79415428 kB' 'Buffers: 9752 kB' 'Cached: 11303112 kB' 'SwapCached: 0 kB' 'Active: 8115084 kB' 'Inactive: 3770068 kB' 'Active(anon): 7720676 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575816 kB' 'Mapped: 159364 kB' 'Shmem: 7148388 kB' 'KReclaimable: 187924 kB' 'Slab: 679028 kB' 'SReclaimable: 187924 kB' 'SUnreclaim: 491104 kB' 'KernelStack: 17536 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481712 kB' 'Committed_AS: 8961084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212180 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
(xtrace elided: setup/common.sh@32 compares each key of the printf output against HugePages_Total and continues; the HugePages_Total line ends the scan)
00:03:28.120 14:23:10 -- setup/common.sh@33 -- # echo 1025
00:03:28.120 14:23:10 -- setup/common.sh@33 -- # return 0
00:03:28.120 14:23:10 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
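The scan just traced is the whole of get_meminfo: slurp a meminfo file, strip any per-node prefix, then walk key/value pairs until the requested key matches and echo its value. A minimal runnable sketch of that logic, reconstructed from the xtrace tags above (setup/common.sh@17 through @33); the names mirror the trace, but this is a reconstruction, not the verbatim source:

#!/usr/bin/env bash
# Sketch of get_meminfo as it appears in the trace; not the verbatim script.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo mem
    # with a node index, read the per-node sysfs copy instead of /proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node N "; strip it (extglob pattern)
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long 'continue' run in the trace
        echo "$val"                        # bare value, e.g. 1025
        return 0
    done
    return 1
}

get_meminfo HugePages_Total      # system-wide total
get_meminfo HugePages_Surp 0     # surplus pages on NUMA node 0

Because the value leaves the function on stdout, the trace shows a bare 'echo 1025' immediately followed by 'return 0' once the key matches.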
00:03:28.120 14:23:10 -- setup/hugepages.sh@112 -- # get_nodes
00:03:28.120 14:23:10 -- setup/hugepages.sh@27 -- # local node
00:03:28.120 14:23:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.120 14:23:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:28.120 14:23:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.120 14:23:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:28.120 14:23:10 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:28.120 14:23:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:28.120 14:23:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.120 14:23:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.120 14:23:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:28.120 14:23:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.120 14:23:10 -- setup/common.sh@18 -- # local node=0
00:03:28.120 14:23:10 -- setup/common.sh@19 -- # local var val
00:03:28.120 14:23:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.121 14:23:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.121 14:23:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:28.121 14:23:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:28.121 14:23:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.121 14:23:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.121 14:23:10 -- setup/common.sh@31 -- # IFS=': '
00:03:28.121 14:23:10 -- setup/common.sh@31 -- # read -r var val _
00:03:28.121 14:23:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 42899128 kB' 'MemUsed: 5165736 kB' 'SwapCached: 0 kB' 'Active: 2490996 kB' 'Inactive: 119896 kB' 'Active(anon): 2244228 kB' 'Inactive(anon): 0 kB' 'Active(file): 246768 kB' 'Inactive(file): 119896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2388264 kB' 'Mapped: 67256 kB' 'AnonPages: 225868 kB' 'Shmem: 2021600 kB' 'KernelStack: 11000 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101980 kB' 'Slab: 356776 kB' 'SReclaimable: 101980 kB' 'SUnreclaim: 254796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
(xtrace elided: per-key scan of the node0 printf output, 'continue' on every key until HugePages_Surp matches)
00:03:28.122 14:23:10 -- setup/common.sh@33 -- # echo 0
00:03:28.122 14:23:10 -- setup/common.sh@33 -- # return 0
00:03:28.122 14:23:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.122 14:23:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.122 14:23:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.122 14:23:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:28.122 14:23:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.122 14:23:10 -- setup/common.sh@18 -- # local node=1
00:03:28.122 14:23:10 -- setup/common.sh@19 -- # local var val
00:03:28.122 14:23:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.122 14:23:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.122 14:23:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:28.122 14:23:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:28.122 14:23:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.122 14:23:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.122 14:23:10 -- setup/common.sh@31 -- # IFS=': '
00:03:28.122 14:23:10 -- setup/common.sh@31 -- # read -r var val _
00:03:28.122 14:23:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220556 kB' 'MemFree: 32911700 kB' 'MemUsed: 11308856 kB' 'SwapCached: 0 kB' 'Active: 5624028 kB' 'Inactive: 3650172 kB' 'Active(anon): 5476388 kB' 'Inactive(anon): 0 kB' 'Active(file): 147640 kB' 'Inactive(file): 3650172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8924628 kB' 'Mapped: 92108 kB' 'AnonPages: 349776 kB' 'Shmem: 5126816 kB' 'KernelStack: 6536 kB' 'PageTables: 3548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85944 kB' 'Slab: 322252 kB' 'SReclaimable: 85944 kB' 'SUnreclaim: 236308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
(xtrace elided: per-key scan of the node1 printf output, 'continue' on every key until HugePages_Surp matches)
00:03:28.123 14:23:10 -- setup/common.sh@33 -- # echo 0
00:03:28.123 14:23:10 -- setup/common.sh@33 -- # return 0
00:03:28.123 14:23:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.123 14:23:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.123 14:23:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.123 14:23:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:28.123 node0=512 expecting 513
00:03:28.123 14:23:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.123 14:23:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.123 14:23:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.123 14:23:10 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:28.123 node1=513 expecting 512
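Both per-node lookups above follow the same pattern: get_meminfo switches from /proc/meminfo to the node's sysfs file, and the tallies land in nodes_sys via the node+([0-9]) glob. A small illustrative sketch of how those per-node numbers can be gathered and printed (a reconstruction of the get_nodes idea from the trace, not the verbatim hugepages.sh):

#!/usr/bin/env bash
# Tally HugePages_Total per NUMA node, the way the trace enumerates nodes.
shopt -s extglob nullglob

declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # per-node lines look like "Node 0 HugePages_Total:   512"; take the value
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done

# an odd pool (1025 pages) cannot split evenly over 2 nodes, hence 512/513
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]}"
done

The "node0=512 expecting 513" lines are not failures: odd_alloc only checks that the sorted multiset of per-node counts matches {512, 513}, which the assertion right after confirms.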
00:03:28.123 14:23:10 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:28.123 real 0m5.972s
00:03:28.123 user 0m2.222s
00:03:28.123 sys 0m3.817s
00:03:28.123 14:23:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:28.123 14:23:10 -- common/autotest_common.sh@10 -- # set +x
00:03:28.123 ************************************
00:03:28.123 END TEST odd_alloc
00:03:28.123 ************************************
00:03:28.123 14:23:10 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:28.123 14:23:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:28.123 14:23:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:28.123 14:23:10 -- common/autotest_common.sh@10 -- # set +x
00:03:28.123 ************************************
00:03:28.123 START TEST custom_alloc
00:03:28.123 ************************************
00:03:28.123 14:23:10 -- common/autotest_common.sh@1104 -- # custom_alloc
00:03:28.123 14:23:10 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:28.123 14:23:10 -- setup/hugepages.sh@169 -- # local node
00:03:28.123 14:23:10 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:28.123 14:23:10 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:28.123 14:23:10 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:28.123 14:23:10 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:28.123 14:23:10 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:28.123 14:23:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:28.123 14:23:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:28.123 14:23:10 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:28.123 14:23:10 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:28.123 14:23:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:28.123 14:23:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:28.123 14:23:10 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:28.123 14:23:10 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:28.123 14:23:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:28.123 14:23:10 -- setup/hugepages.sh@83 -- # : 256
00:03:28.123 14:23:10 -- setup/hugepages.sh@84 -- # : 1
00:03:28.123 14:23:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:28.123 14:23:10 -- setup/hugepages.sh@83 -- # : 0
00:03:28.123 14:23:10 -- setup/hugepages.sh@84 -- # : 0
00:03:28.123 14:23:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:28.123 14:23:10 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:28.123 14:23:10 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:28.123 14:23:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:28.123 14:23:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:28.123 14:23:10 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:28.123 14:23:10 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:28.123 14:23:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:28.123 14:23:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:28.123 14:23:10 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:28.123 14:23:10 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:28.123 14:23:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:28.123 14:23:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:28.123 14:23:10 -- setup/hugepages.sh@78 -- # return 0
00:03:28.123 14:23:10 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:28.123 14:23:10 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:28.123 14:23:10 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:28.123 14:23:10 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:28.123 14:23:10 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:28.123 14:23:10 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:28.123 14:23:10 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:28.123 14:23:10 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:28.123 14:23:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:28.123 14:23:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:28.123 14:23:10 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:28.123 14:23:10 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:28.123 14:23:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:28.123 14:23:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:28.123 14:23:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:28.123 14:23:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:28.123 14:23:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:28.123 14:23:10 -- setup/hugepages.sh@78 -- # return 0
00:03:28.123 14:23:10 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:28.123 14:23:10 -- setup/hugepages.sh@187 -- # setup output
00:03:28.123 14:23:10 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.123 14:23:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:32.320 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:32.320 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:32.320 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
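The interesting line in the setup above is HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024': custom_alloc asks for 1 GiB on node 0 and 2 GiB on node 1, both expressed in kB and divided by the default 2048 kB hugepage size, then comma-joined via the local IFS=, seen at @167. A sketch of that arithmetic and string assembly (a reconstruction, not the verbatim hugepages.sh):

#!/usr/bin/env bash
# How the HUGENODE string above is built from the requested sizes.
default_hugepages=2048                        # kB, Hugepagesize from /proc/meminfo

declare -a nodes_hp HUGENODE
nodes_hp[0]=$((1048576 / default_hugepages))  # 1048576 kB = 1 GiB -> 512 pages
nodes_hp[1]=$((2097152 / default_hugepages))  # 2097152 kB = 2 GiB -> 1024 pages

for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done

IFS=,                                          # the trace's 'local IFS=,'
echo "HUGENODE=${HUGENODE[*]}"                 # nodes_hp[0]=512,nodes_hp[1]=1024
echo "total pages: $((nodes_hp[0] + nodes_hp[1]))"   # 1536, checked next by verify_nr_hugepages

The 512 + 1024 = 1536 total is exactly the nr_hugepages that the verification below expects to find in /proc/meminfo after setup.sh rebinds the devices.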
00:03:33.934 14:23:16 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:33.934 14:23:16 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:33.934 14:23:16 -- setup/hugepages.sh@89 -- # local node
00:03:33.934 14:23:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.934 14:23:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.934 14:23:16 -- setup/hugepages.sh@92 -- # local surp
00:03:33.934 14:23:16 -- setup/hugepages.sh@93 -- # local resv
00:03:33.934 14:23:16 -- setup/hugepages.sh@94 -- # local anon
00:03:33.934 14:23:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.934 14:23:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.934 14:23:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.934 14:23:16 -- setup/common.sh@18 -- # local node=
00:03:33.934 14:23:16 -- setup/common.sh@19 -- # local var val
00:03:33.934 14:23:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.934 14:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.934 14:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.934 14:23:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.934 14:23:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.934 14:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.934 14:23:16 -- setup/common.sh@31 -- # IFS=': '
00:03:33.934 14:23:16 -- setup/common.sh@31 -- # read -r var val _
00:03:33.934 14:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 74810716 kB' 'MemAvailable: 78415772 kB' 'Buffers: 9752 kB' 'Cached: 11303260 kB' 'SwapCached: 0 kB' 'Active: 8115056 kB' 'Inactive: 3770068 kB' 'Active(anon): 7720648 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575444 kB' 'Mapped: 159336 kB' 'Shmem: 7148536 kB' 'KReclaimable: 187892 kB' 'Slab: 679784 kB' 'SReclaimable: 187892 kB' 'SUnreclaim: 491892 kB' 'KernelStack: 17728 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958448 kB' 'Committed_AS: 8961508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212308 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
(xtrace elided: per-key scan of the printf output, 'continue' on every key until AnonHugePages matches)
00:03:33.935 14:23:16 -- setup/common.sh@33 -- # echo 0
00:03:33.935 14:23:16 -- setup/common.sh@33 -- # return 0
00:03:33.935 14:23:16 -- setup/hugepages.sh@97 -- # anon=0
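The @96 test above guards against transparent hugepages: 'always [madvise] never' is the kernel's THP mode string with the active mode bracketed, and anon=0 confirms no anonymous THP pages are inflating the counters before the HugePages_* math is trusted. A short sketch of the same guard (the sysfs path is the standard kernel one; the surrounding logic is reconstructed, not the verbatim hugepages.sh):

#!/usr/bin/env bash
# Only bother checking AnonHugePages when THP is not fully disabled.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)

if [[ $thp != *"[never]"* ]]; then
    # THP is enabled in some mode, so anonymous huge pages could distort
    # the HugePages_* accounting; confirm none are in use right now
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "anon_hugepages=$anon"                # 0 in the run above
fi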
14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 
14:23:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.936 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.936 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # continue 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.937 14:23:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.937 14:23:16 -- setup/common.sh@33 -- # echo 0 00:03:33.937 14:23:16 -- setup/common.sh@33 -- # return 0 00:03:33.937 14:23:16 -- setup/hugepages.sh@99 -- # surp=0 00:03:33.937 14:23:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.937 14:23:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.937 14:23:16 -- setup/common.sh@18 -- # local node= 00:03:33.937 14:23:16 -- setup/common.sh@19 -- # local var val 00:03:33.937 14:23:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.937 14:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.937 14:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.937 14:23:16 -- 
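The xtrace records above all come from one helper, get_meminfo: it loads a meminfo file into an array, strips any "Node <n>" prefixes, then walks the entries with IFS=': ' and read, skipping every key that is not the requested one. A minimal, self-contained sketch of that scan pattern follows; get_meminfo_sketch and its internals are illustrative assumptions, not the script's exact source.

  #!/usr/bin/env bash
  # Sketch of the key-by-key meminfo scan visible in the trace; names assumed.
  shopt -s extglob

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      # Given a node index, read the per-node sysfs file instead, as the
      # trace probes /sys/devices/system/node/node<N>/meminfo.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }       # per-node lines carry a "Node <n> " prefix
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] || continue  # source of the long runs of "continue" above
          echo "${val:-0}"                  # default to 0 if the key carries no value
          return 0
      done <"$mem_f"
      return 1
  }

  get_meminfo_sketch HugePages_Surp     # system-wide; prints 0 on this host
  get_meminfo_sketch HugePages_Surp 0   # NUMA node 0 only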
00:03:33.937 14:23:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.937 14:23:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.937 14:23:16 -- setup/common.sh@18 -- # local node=
00:03:33.937 14:23:16 -- setup/common.sh@19 -- # local var val
00:03:33.937 14:23:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.937 14:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.937 14:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.937 14:23:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.937 14:23:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.937 14:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.937 14:23:16 -- setup/common.sh@31 -- # IFS=': '
00:03:33.937 14:23:16 -- setup/common.sh@31 -- # read -r var val _
00:03:33.937 14:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 74811624 kB' 'MemAvailable: 78416680 kB' 'Buffers: 9752 kB' 'Cached: 11303272 kB' 'SwapCached: 0 kB' 'Active: 8114724 kB' 'Inactive: 3770068 kB' 'Active(anon): 7720316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574576 kB' 'Mapped: 159424 kB' 'Shmem: 7148548 kB' 'KReclaimable: 187892 kB' 'Slab: 679708 kB' 'SReclaimable: 187892 kB' 'SUnreclaim: 491816 kB' 'KernelStack: 17568 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958448 kB' 'Committed_AS: 8961900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212212 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:33.938 14:23:16 -- setup/common.sh@32 -- # [trace condensed: every key from MemTotal through HugePages_Free compared against HugePages_Rsvd, each skipped via continue]
00:03:33.939 14:23:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.939 14:23:16 -- setup/common.sh@33 -- # echo 0
00:03:33.939 14:23:16 -- setup/common.sh@33 -- # return 0
00:03:33.939 14:23:16 -- setup/hugepages.sh@100 -- # resv=0
00:03:33.939 14:23:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:33.939 nr_hugepages=1536
00:03:33.939 14:23:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.939 resv_hugepages=0
00:03:33.939 14:23:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.939 surplus_hugepages=0
00:03:33.939 14:23:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.939 anon_hugepages=0
00:03:33.939 14:23:16 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:33.939 14:23:16 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
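The four summary values just echoed feed the checks at hugepages.sh@107-109: the 1536 pages requested for this run must all be accounted for as allocated pages plus surplus plus reserved. Restated as a tiny standalone bash sketch using the values parsed from /proc/meminfo above:

  # Hugepage accounting for this run: requested == allocated + surplus + reserved.
  nr_hugepages=1536 surp=0 resv=0 anon=0   # values parsed from /proc/meminfo above
  if (( 1536 == nr_hugepages + surp + resv )) && (( 1536 == nr_hugepages )); then
      echo "hugepage accounting consistent: $((nr_hugepages + surp + resv)) pages"
  fi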
00:03:33.939 14:23:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.939 14:23:16 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:33.939 14:23:16 -- setup/common.sh@18 -- # local node=
00:03:33.939 14:23:16 -- setup/common.sh@19 -- # local var val
00:03:33.939 14:23:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.939 14:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.939 14:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.939 14:23:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.939 14:23:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.939 14:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.939 14:23:16 -- setup/common.sh@31 -- # IFS=': '
00:03:33.939 14:23:16 -- setup/common.sh@31 -- # read -r var val _
00:03:33.939 14:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 74811784 kB' 'MemAvailable: 78416840 kB' 'Buffers: 9752 kB' 'Cached: 11303276 kB' 'SwapCached: 0 kB' 'Active: 8115104 kB' 'Inactive: 3770068 kB' 'Active(anon): 7720696 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574960 kB' 'Mapped: 159424 kB' 'Shmem: 7148552 kB' 'KReclaimable: 187892 kB' 'Slab: 679708 kB' 'SReclaimable: 187892 kB' 'SUnreclaim: 491816 kB' 'KernelStack: 17584 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958448 kB' 'Committed_AS: 8961916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212212 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:33.940 14:23:16 -- setup/common.sh@32 -- # [trace condensed: every key from MemTotal through Unaccepted compared against HugePages_Total, each skipped via continue]
00:03:33.940 14:23:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.940 14:23:16 -- setup/common.sh@33 -- # echo 1536
00:03:33.940 14:23:16 -- setup/common.sh@33 -- # return 0
00:03:33.940 14:23:16 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:33.940 14:23:16 -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.940 14:23:16 -- setup/hugepages.sh@27 -- # local node
00:03:33.940 14:23:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.940 14:23:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:33.940 14:23:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.940 14:23:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:33.940 14:23:16 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.940 14:23:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
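get_nodes, traced above, enumerates the NUMA node directories with an extglob pattern and records each node's expected hugepage count; this run sees two nodes holding 512 and 1024 pages. A standalone sketch of that enumeration follows; the trace only shows the resulting assignments, so reading the counts from the sysfs nr_hugepages files is an assumption about the data source.

  # Sketch of the get_nodes enumeration; reading sysfs nr_hugepages is assumed.
  shopt -s extglob nullglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # Key by node index, value = that node's 2 MiB hugepage count.
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }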
00:03:33.940 14:23:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.940 14:23:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.940 14:23:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:33.940 14:23:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.940 14:23:16 -- setup/common.sh@18 -- # local node=0
00:03:33.940 14:23:16 -- setup/common.sh@19 -- # local var val
00:03:33.940 14:23:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.940 14:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.940 14:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:33.940 14:23:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:33.940 14:23:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.940 14:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.940 14:23:16 -- setup/common.sh@31 -- # IFS=': '
00:03:33.941 14:23:16 -- setup/common.sh@31 -- # read -r var val _
00:03:33.941 14:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 42913652 kB' 'MemUsed: 5151212 kB' 'SwapCached: 0 kB' 'Active: 2489964 kB' 'Inactive: 119896 kB' 'Active(anon): 2243196 kB' 'Inactive(anon): 0 kB' 'Active(file): 246768 kB' 'Inactive(file): 119896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2388300 kB' 'Mapped: 67320 kB' 'AnonPages: 224680 kB' 'Shmem: 2021636 kB' 'KernelStack: 10968 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101980 kB' 'Slab: 356496 kB' 'SReclaimable: 101980 kB' 'SUnreclaim: 254516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:33.941 14:23:16 -- setup/common.sh@32 -- # [trace condensed: every node0 key from MemTotal through HugePages_Free compared against HugePages_Surp, each skipped via continue]
00:03:33.941 14:23:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.941 14:23:16 -- setup/common.sh@33 -- # echo 0
00:03:33.941 14:23:16 -- setup/common.sh@33 -- # return 0
00:03:33.941 14:23:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
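Node 0 thus contributes no surplus, and the loop advances to node 1 with the same steps. The iteration pattern at hugepages.sh@115-117, restated as a self-contained bash sketch (array names mirror the trace; the awk extraction replaces the script's own reader and is illustrative):

  # Per-node check loop: each node's expectation absorbs the global reservation
  # plus that node's own surplus pages from its sysfs meminfo.
  declare -a nodes_test=([0]=512 [1]=1024)   # expectations seeded by get_nodes
  resv=0
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
          "/sys/devices/system/node/node$node/meminfo")
      (( nodes_test[node] += surp ))
  done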
00:03:33.941 14:23:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.941 14:23:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.941 14:23:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:33.941 14:23:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.941 14:23:16 -- setup/common.sh@18 -- # local node=1
00:03:33.941 14:23:16 -- setup/common.sh@19 -- # local var val
00:03:33.941 14:23:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.941 14:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.941 14:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:33.941 14:23:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:33.941 14:23:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.941 14:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.941 14:23:16 -- setup/common.sh@31 -- # IFS=': '
00:03:33.942 14:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220556 kB' 'MemFree: 31898348 kB' 'MemUsed: 12322208 kB' 'SwapCached: 0 kB' 'Active: 5625168 kB' 'Inactive: 3650172 kB' 'Active(anon): 5477528 kB' 'Inactive(anon): 0 kB' 'Active(file): 147640 kB' 'Inactive(file): 3650172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8924756 kB' 'Mapped: 92104 kB' 'AnonPages: 350288 kB' 'Shmem: 5126944 kB' 'KernelStack: 6616 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85912 kB' 'Slab: 323212 kB' 'SReclaimable: 85912 kB' 'SUnreclaim: 237300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:33.942 14:23:16 -- setup/common.sh@31 -- # read -r var val _
00:03:33.942 14:23:16 -- setup/common.sh@32 -- # [xtrace condensed: every node-1 meminfo field from MemTotal through HugePages_Free fails the match against HugePages_Surp and hits continue]
00:03:33.943 14:23:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.943 14:23:16 -- setup/common.sh@33 -- # echo 0
00:03:33.943 14:23:16 -- setup/common.sh@33 -- # return 0
00:03:33.943 14:23:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.943 14:23:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.943 14:23:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.943 14:23:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.943 14:23:16 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:33.943 node0=512 expecting 512
00:03:33.943 14:23:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.943 14:23:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.943 14:23:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.943 14:23:16 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:33.943 node1=1024 expecting 1024
00:03:33.943 14:23:16 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:33.943 
00:03:33.943 real 0m5.966s
00:03:33.943 user 0m2.125s
00:03:33.943 sys 0m3.910s
00:03:33.943 14:23:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:33.943 14:23:16 -- common/autotest_common.sh@10 -- # set +x
00:03:33.943 ************************************
00:03:33.943 END TEST custom_alloc
00:03:33.943 ************************************
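The traces above step through setup/common.sh's get_meminfo helper once per node. A minimal reconstruction of that helper, pieced together from the setup/common.sh@17-@33 records in this log; the real script may differ in detail:

get_meminfo() {
    # get_meminfo <field> [node]: print the numeric value of <field>,
    # system-wide from /proc/meminfo or per-node from sysfs.
    local get=$1 node=${2:-}
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 1 HugePages_Total: 1024"; strip the
    # prefix so both file formats parse identically (needs extglob).
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

For example, get_meminfo HugePages_Surp 1 prints 0 on this box, exactly as the trace shows; the field-by-field continue records are this loop skipping every non-matching line.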
00:03:33.943 14:23:16 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:33.943 14:23:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:33.943 14:23:16 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:33.943 14:23:16 -- common/autotest_common.sh@10 -- # set +x
00:03:33.943 ************************************
00:03:33.943 START TEST no_shrink_alloc
00:03:33.943 ************************************
00:03:33.943 14:23:16 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:03:33.943 14:23:16 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:33.943 14:23:16 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:33.943 14:23:16 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:33.943 14:23:16 -- setup/hugepages.sh@51 -- # shift
00:03:33.943 14:23:16 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:33.943 14:23:16 -- setup/hugepages.sh@52 -- # local node_ids
00:03:33.943 14:23:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.943 14:23:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:33.943 14:23:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:33.943 14:23:16 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:33.943 14:23:16 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.943 14:23:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:33.943 14:23:16 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:33.943 14:23:16 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.943 14:23:16 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.943 14:23:16 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:33.943 14:23:16 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:33.943 14:23:16 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:33.943 14:23:16 -- setup/hugepages.sh@73 -- # return 0
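The @62-@73 records above show get_test_nr_hugepages_per_node taking its explicit-node branch. A condensed sketch of that function under the names the trace uses; the even-split fallback for the no-argument case is inferred from the (( 1 > 0 )) guard, not exercised in this log:

get_test_nr_hugepages_per_node() {
    local user_nodes=("$@")            # here: ('0')
    local _nr_hugepages=$nr_hugepages  # 1024 on this run
    local _no_nodes=2                  # NUMA nodes on this rig
    nodes_test=()
    local -g nodes_test
    if (( ${#user_nodes[@]} > 0 )); then
        # Explicit node list: every listed node gets the full request
        # (the trace reuses _no_nodes as the loop variable).
        for _no_nodes in "${user_nodes[@]}"; do
            nodes_test[_no_nodes]=$_nr_hugepages
        done
        return 0
    fi
    # No node list: split the request evenly instead (assumed fallback,
    # not shown in this excerpt).
    local node
    for ((node = 0; node < _no_nodes; node++)); do
        nodes_test[node]=$((_nr_hugepages / _no_nodes))
    done
}

With user_nodes=('0') this yields nodes_test[0]=1024, which is why the verification below expects a 1024-page pool.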
00:03:33.943 14:23:16 -- setup/hugepages.sh@198 -- # setup output
00:03:33.943 14:23:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.943 14:23:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:38.221 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:38.221 0000:00:04.0-0000:00:04.7 (8086 2021): Already using the vfio-pci driver [8 identical per-device lines condensed]
00:03:38.221 0000:80:04.0-0000:80:04.7 (8086 2021): Already using the vfio-pci driver [8 identical per-device lines condensed]
00:03:40.129 14:23:22 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:40.129 14:23:22 -- setup/hugepages.sh@89 -- # local node
00:03:40.129 14:23:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.129 14:23:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.129 14:23:22 -- setup/hugepages.sh@92 -- # local surp
00:03:40.129 14:23:22 -- setup/hugepages.sh@93 -- # local resv
00:03:40.129 14:23:22 -- setup/hugepages.sh@94 -- # local anon
00:03:40.129 14:23:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:40.129 14:23:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:40.129 14:23:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:40.129 14:23:22 -- setup/common.sh@18 -- # local node=
00:03:40.129 14:23:22 -- setup/common.sh@19 -- # local var val
00:03:40.129 14:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.129 14:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.129 14:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.129 14:23:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.129 14:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.129 14:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.129 14:23:22 -- setup/common.sh@31 -- # IFS=': '
00:03:40.130 14:23:22 -- setup/common.sh@31 -- # read -r var val _
00:03:40.130 14:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75855372 kB' 'MemAvailable: 79460312 kB' 'Buffers: 9752 kB' 'Cached: 11303412 kB' 'SwapCached: 0 kB' 'Active: 8115624 kB' 'Inactive: 3770068 kB' 'Active(anon): 7721216 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575464 kB' 'Mapped: 158544 kB' 'Shmem: 7148688 kB' 'KReclaimable: 187660 kB' 'Slab: 678396 kB' 'SReclaimable: 187660 kB' 'SUnreclaim: 490736 kB' 'KernelStack: 17456 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8928300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212036 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:40.130 14:23:22 -- setup/common.sh@32 -- # [xtrace condensed: every field from MemTotal through HardwareCorrupted fails the match against AnonHugePages and hits continue]
00:03:40.131 14:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.131 14:23:22 -- setup/common.sh@33 -- # echo 0
00:03:40.131 14:23:22 -- setup/common.sh@33 -- # return 0
00:03:40.131 14:23:22 -- setup/hugepages.sh@97 -- # anon=0
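The hugepages.sh@96 check above matches the bracketed mode in the string "always [madvise] never", i.e. the active transparent-hugepage setting, and only fetches AnonHugePages when THP is not pinned to never. A small sketch of that gate; the sysfs path is assumed from the kernel's standard layout, since the trace only shows the string test:

# read e.g. "always [madvise] never"; the brackets mark the active mode
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP can hand out anonymous huge pages behind the test's back,
    # so they have to be counted; AnonHugePages is 0 kB on this host.
    anon=$(get_meminfo AnonHugePages)
fi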
00:03:40.131 14:23:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:40.131 14:23:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.131 14:23:22 -- setup/common.sh@18 -- # local node=
00:03:40.131 14:23:22 -- setup/common.sh@19 -- # local var val
00:03:40.131 14:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.131 14:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.131 14:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.131 14:23:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.131 14:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.131 14:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.131 14:23:22 -- setup/common.sh@31 -- # IFS=': '
00:03:40.131 14:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75857320 kB' 'MemAvailable: 79462260 kB' 'Buffers: 9752 kB' 'Cached: 11303412 kB' 'SwapCached: 0 kB' 'Active: 8114704 kB' 'Inactive: 3770068 kB' 'Active(anon): 7720296 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574984 kB' 'Mapped: 158460 kB' 'Shmem: 7148688 kB' 'KReclaimable: 187660 kB' 'Slab: 678380 kB' 'SReclaimable: 187660 kB' 'SUnreclaim: 490720 kB' 'KernelStack: 17440 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8928312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212004 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:40.131 14:23:22 -- setup/common.sh@31 -- # read -r var val _
00:03:40.131 14:23:22 -- setup/common.sh@32 -- # [xtrace condensed: every field from MemTotal through HugePages_Rsvd fails the match against HugePages_Surp and hits continue]
00:03:40.132 14:23:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.132 14:23:22 -- setup/common.sh@33 -- # echo 0
00:03:40.132 14:23:22 -- setup/common.sh@33 -- # return 0
00:03:40.132 14:23:22 -- setup/hugepages.sh@99 -- # surp=0
00:03:40.132 14:23:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:40.132 14:23:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:40.132 14:23:22 -- setup/common.sh@18 -- # local node=
00:03:40.132 14:23:22 -- setup/common.sh@19 -- # local var val
00:03:40.132 14:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.132 14:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.132 14:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.132 14:23:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.132 14:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.132 14:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.132 14:23:22 -- setup/common.sh@31 -- # IFS=': '
00:03:40.132 14:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75861100 kB' 'MemAvailable: 79466040 kB' 'Buffers: 9752 kB' 'Cached: 11303424 kB' 'SwapCached: 0 kB' 'Active: 8114572 kB' 'Inactive: 3770068 kB' 'Active(anon): 7720164 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574828 kB' 'Mapped: 158460 kB' 'Shmem: 7148700 kB' 'KReclaimable: 187660 kB' 'Slab: 678380 kB' 'SReclaimable: 187660 kB' 'SUnreclaim: 490720 kB' 'KernelStack: 17424 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8928324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212004 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:40.132 14:23:22 -- setup/common.sh@31 -- # read -r var val _
00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [xtrace condensed: every field from MemTotal through HugePages_Free fails the match against HugePages_Rsvd and hits continue]
00:03:40.134 14:23:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:40.134 14:23:22 -- setup/common.sh@33 -- # echo 0
00:03:40.134 14:23:22 -- setup/common.sh@33 -- # return 0
00:03:40.134 14:23:22 -- setup/hugepages.sh@100 -- # resv=0
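With anon, surp and resv all collected (each 0 on this host), the rest of verify_nr_hugepages reduces to arithmetic against the requested pool. A condensed sketch of the @102-@109 records that follow, with total standing in for the already-expanded 1024 in the trace (its exact origin is outside this excerpt):

# gathered above: anon=0 surp=0 resv=0; the test requested nr_hugepages=1024
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
(( total == nr_hugepages + surp + resv ))  # 1024 == 1024 + 0 + 0: pool fully accounted for
(( total == nr_hugepages ))                # and none of it is surplus or reserved

Both checks pass here, so the per-node breakdown that follows starts from a clean 1024-page pool.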
-- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # continue 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.133 14:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.133 14:23:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.133 14:23:22 -- setup/common.sh@32 
-- # continue
00:03:40.133 14:23:22 -- setup/common.sh@31-32 -- # [trace condensed: remaining /proc/meminfo fields, SwapTotal through HugePages_Free, each read with "IFS=': '; read -r var val _", compared against HugePages_Rsvd, and skipped via continue]
00:03:40.134 14:23:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:40.134 14:23:22 -- setup/common.sh@33 -- # echo 0
00:03:40.134 14:23:22 -- setup/common.sh@33 -- # return 0
00:03:40.134 14:23:22 -- setup/hugepages.sh@100 -- # resv=0
00:03:40.134 14:23:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:40.134 nr_hugepages=1024
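The lookup traced above is setup/common.sh's get_meminfo: it snapshots the meminfo source with mapfile, then splits each line on "IFS=': '" and compares the field name until the requested key matches, echoing the value and returning. A minimal standalone sketch of the same parsing idea; "meminfo_get" is an illustrative name, not the real helper, and unlike the real script this version streams the file instead of buffering it first:

    #!/usr/bin/env bash
    # Sketch of the field lookup traced above: split each /proc/meminfo
    # line on ": " and print the value of the requested field.
    meminfo_get() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
    }

    meminfo_get HugePages_Rsvd    # prints 0 here, matching resv=0 above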
00:03:40.134 14:23:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:40.134 resv_hugepages=0
00:03:40.134 14:23:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:40.134 surplus_hugepages=0
00:03:40.134 14:23:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:40.134 anon_hugepages=0
00:03:40.134 14:23:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.134 14:23:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:40.134 14:23:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:40.134 14:23:22 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:40.134 14:23:22 -- setup/common.sh@18 -- # local node=
00:03:40.134 14:23:22 -- setup/common.sh@19 -- # local var val
00:03:40.134 14:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.134 14:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.134 14:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.134 14:23:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.134 14:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.134 14:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.134 14:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75861336 kB' 'MemAvailable: 79466276 kB' 'Buffers: 9752 kB' 'Cached: 11303440 kB' 'SwapCached: 0 kB' 'Active: 8115416 kB' 'Inactive: 3770068 kB' 'Active(anon): 7721008 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575668 kB' 'Mapped: 158964 kB' 'Shmem: 7148716 kB' 'KReclaimable: 187660 kB' 'Slab: 678364 kB' 'SReclaimable: 187660 kB' 'SUnreclaim: 490704 kB' 'KernelStack: 17440 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8929564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212020 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:40.134 14:23:22 -- setup/common.sh@31-32 -- # [trace condensed: each snapshot field from MemTotal through Unaccepted is read with "IFS=': '; read -r var val _", compared against HugePages_Total, and skipped via continue]
00:03:40.136 14:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:40.136 14:23:22 -- setup/common.sh@33 -- # echo 1024
00:03:40.136 14:23:22 -- setup/common.sh@33 -- # return 0
00:03:40.136 14:23:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.136 14:23:22 -- setup/hugepages.sh@112 -- # get_nodes
00:03:40.136 14:23:22 -- setup/hugepages.sh@27 -- # local node
00:03:40.136 14:23:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.136 14:23:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:40.136 14:23:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.136 14:23:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:40.136 14:23:22 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:40.136 14:23:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:40.136 14:23:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.136 14:23:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
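With resv and the expected count in hand, hugepages.sh@107-110 above verifies the kernel's view against the requested allocation: HugePages_Total must equal nr_hugepages plus surplus plus reserved (here 1024 == 1024 + 0 + 0). A sketch of that identity check, reusing the illustrative meminfo_get helper from the earlier sketch; reading the request from /proc/sys/vm/nr_hugepages is my assumption about an equivalent source, not what the script does:

    # Sketch of the accounting identity checked at hugepages.sh@107/@110.
    total=$(meminfo_get HugePages_Total)
    surp=$(meminfo_get HugePages_Surp)
    resv=$(meminfo_get HugePages_Rsvd)
    nr=$(cat /proc/sys/vm/nr_hugepages)   # currently requested pool size
    if (( total == nr + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: total=$total nr=$nr surp=$surp resv=$resv" >&2
    fi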
00:03:40.136 14:23:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:40.136 14:23:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.136 14:23:22 -- setup/common.sh@18 -- # local node=0
00:03:40.136 14:23:22 -- setup/common.sh@19 -- # local var val
00:03:40.136 14:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.136 14:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.136 14:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:40.136 14:23:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:40.136 14:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.136 14:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.136 14:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 41872956 kB' 'MemUsed: 6191908 kB' 'SwapCached: 0 kB' 'Active: 2490656 kB' 'Inactive: 119896 kB' 'Active(anon): 2243888 kB' 'Inactive(anon): 0 kB' 'Active(file): 246768 kB' 'Inactive(file): 119896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2388324 kB' 'Mapped: 67140 kB' 'AnonPages: 225484 kB' 'Shmem: 2021660 kB' 'KernelStack: 11000 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101836 kB' 'Slab: 356084 kB' 'SReclaimable: 101836 kB' 'SUnreclaim: 254248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:40.136 14:23:22 -- setup/common.sh@31-32 -- # [trace condensed: each node0 snapshot field from MemTotal through HugePages_Free is read with "IFS=': '; read -r var val _", compared against HugePages_Surp, and skipped via continue]
00:03:40.137 14:23:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.137 14:23:22 -- setup/common.sh@33 -- # echo 0
00:03:40.137 14:23:22 -- setup/common.sh@33 -- # return 0
00:03:40.137 14:23:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:40.137 14:23:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:40.137 14:23:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:40.137 14:23:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:40.137 14:23:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:40.137 node0=1024 expecting 1024
00:03:40.137 14:23:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:40.137 14:23:22 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:40.137 14:23:22 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:40.137 14:23:22 -- setup/hugepages.sh@202 -- # setup output
00:03:40.137 14:23:22 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.137 14:23:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:43.426 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:43.426 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:43.426 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:43.426 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:43.685 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:43.685 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:43.685 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:43.685 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:43.685 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:43.685 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:43.685 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:43.686 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:43.686 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:43.686 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:43.686 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:43.686 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:43.686 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:45.606 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:45.870 14:23:28 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:45.870 14:23:28 -- setup/hugepages.sh@89 -- # local node
00:03:45.870 14:23:28 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:45.870 14:23:28 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:45.870 14:23:28 -- setup/hugepages.sh@92 -- # local surp
00:03:45.870 14:23:28 -- setup/hugepages.sh@93 -- # local resv
00:03:45.870 14:23:28 -- setup/hugepages.sh@94 -- # local anon
00:03:45.870 14:23:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.870 14:23:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:45.870 14:23:28 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:45.870 14:23:28 -- setup/common.sh@18 -- # local node=
00:03:45.870 14:23:28 -- setup/common.sh@19 -- # local var val
00:03:45.870 14:23:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.870 14:23:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.870 14:23:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.870 14:23:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.870 14:23:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.870 14:23:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.871 14:23:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75830552 kB' 'MemAvailable: 79435508 kB' 'Buffers: 9752 kB' 'Cached: 11303564 kB' 'SwapCached: 0 kB' 'Active: 8116936 kB' 'Inactive: 3770068 kB' 'Active(anon): 7722528 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576944 kB' 'Mapped: 158636 kB' 'Shmem: 7148840 kB' 'KReclaimable: 187692 kB' 'Slab: 678280 kB' 'SReclaimable: 187692 kB' 'SUnreclaim: 490588 kB' 'KernelStack: 17712 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8933360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212276 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:45.871 14:23:28 -- setup/common.sh@31-32 -- # [trace condensed: each snapshot field from MemTotal through HardwareCorrupted is read with "IFS=': '; read -r var val _", compared against AnonHugePages, and skipped via continue]
00:03:45.872 14:23:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.872 14:23:28 -- setup/common.sh@33 -- # echo 0
00:03:45.872 14:23:28 -- setup/common.sh@33 -- # return 0
00:03:45.872 14:23:28 -- setup/hugepages.sh@97 -- # anon=0
00:03:45.872 14:23:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:45.872 14:23:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.872 14:23:28 -- setup/common.sh@18 -- # local node=
00:03:45.872 14:23:28 -- setup/common.sh@19 -- # local var val
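verify_nr_hugepages first gates on transparent hugepages: the test at hugepages.sh@96 above checks that the bracketed mode in /sys/kernel/mm/transparent_hugepage/enabled is not [never] (here it is "always [madvise] never", so THP is active and AnonHugePages is read, coming back 0 kB). A sketch of that gate, again using the illustrative meminfo_get from the earlier sketch:

    # Sketch of the THP gate at hugepages.sh@96-97: only read AnonHugePages
    # when transparent_hugepage is not pinned to "never".
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(meminfo_get AnonHugePages)   # 0 on this host, per the trace
    else
        anon=0
    fi
    echo "anon=$anon"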
00:03:45.872 14:23:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.872 14:23:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.872 14:23:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.872 14:23:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.872 14:23:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.872 14:23:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.872 14:23:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75833004 kB' 'MemAvailable: 79437960 kB' 'Buffers: 9752 kB' 'Cached: 11303564 kB' 'SwapCached: 0 kB' 'Active: 8116832 kB' 'Inactive: 3770068 kB' 'Active(anon): 7722424 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576872 kB' 'Mapped: 158520 kB' 'Shmem: 7148840 kB' 'KReclaimable: 187692 kB' 'Slab: 678200 kB' 'SReclaimable: 187692 kB' 'SUnreclaim: 490508 kB' 'KernelStack: 17536 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8933372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212260 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB'
00:03:45.872 14:23:28 -- setup/common.sh@31-32 -- # [trace condensed: each snapshot field from MemTotal through HugePages_Rsvd is read with "IFS=': '; read -r var val _", compared against HugePages_Surp, and skipped via continue]
00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.874 14:23:28 -- setup/common.sh@33 -- # echo 0
00:03:45.874 14:23:28 -- setup/common.sh@33 -- # return 0
00:03:45.874 14:23:28 -- setup/hugepages.sh@99 -- # surp=0
00:03:45.874 14:23:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:45.874 14:23:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:45.874 14:23:28 -- setup/common.sh@18 -- # local node=
00:03:45.874 14:23:28 -- setup/common.sh@19 -- # local var val
00:03:45.874 14:23:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.874 14:23:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.874 14:23:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.874 14:23:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.874 14:23:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.874 14:23:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.874 14:23:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75829984 kB' 'MemAvailable: 79434940 kB' 'Buffers: 9752 kB' 'Cached: 11303564 kB' 'SwapCached: 0 kB' 'Active: 8117040 kB' 'Inactive: 3770068 kB' 'Active(anon): 7722632 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB'
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577144 kB' 'Mapped: 158524 kB' 'Shmem: 7148840 kB' 'KReclaimable: 187692 kB' 'Slab: 678168 kB' 'SReclaimable: 187692 kB' 'SUnreclaim: 490476 kB' 'KernelStack: 17536 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8933384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212324 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.874 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.874 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 
00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.875 14:23:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.875 14:23:28 -- setup/common.sh@33 -- # echo 0 00:03:45.875 14:23:28 -- setup/common.sh@33 -- # return 0 00:03:45.875 14:23:28 -- setup/hugepages.sh@100 -- # resv=0 00:03:45.875 14:23:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.875 nr_hugepages=1024 00:03:45.875 14:23:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.875 resv_hugepages=0 00:03:45.875 14:23:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.875 surplus_hugepages=0 00:03:45.875 14:23:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.875 anon_hugepages=0 00:03:45.875 14:23:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.875 14:23:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.875 14:23:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.875 14:23:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.875 14:23:28 -- setup/common.sh@18 -- # local node= 00:03:45.875 14:23:28 -- setup/common.sh@19 -- # local var val 00:03:45.875 14:23:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.875 14:23:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.875 14:23:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.875 14:23:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.875 14:23:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.875 14:23:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.875 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285420 kB' 'MemFree: 75830736 kB' 'MemAvailable: 79435692 kB' 'Buffers: 9752 kB' 'Cached: 11303580 kB' 'SwapCached: 0 kB' 'Active: 8116852 kB' 'Inactive: 3770068 kB' 'Active(anon): 7722444 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3770068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576888 kB' 'Mapped: 158524 kB' 'Shmem: 7148856 kB' 'KReclaimable: 187692 kB' 'Slab: 678168 kB' 'SReclaimable: 187692 kB' 'SUnreclaim: 490476 kB' 'KernelStack: 17568 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482736 kB' 'Committed_AS: 8931900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212260 kB' 'VmallocChunk: 0 kB' 'Percpu: 47520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 529848 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 91226112 kB' 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 
14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.876 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.876 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 
14:23:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 
00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.877 14:23:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.877 14:23:28 -- setup/common.sh@33 -- # echo 1024 00:03:45.877 14:23:28 -- setup/common.sh@33 -- # return 0 00:03:45.877 14:23:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:03:45.877 14:23:28 -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.877 14:23:28 -- setup/hugepages.sh@27 -- # local node 00:03:45.877 14:23:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.877 14:23:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.877 14:23:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.877 14:23:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.877 14:23:28 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.877 14:23:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.877 14:23:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.877 14:23:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.877 14:23:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.877 14:23:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.877 14:23:28 -- setup/common.sh@18 -- # local node=0 00:03:45.877 14:23:28 -- setup/common.sh@19 -- # local var val 00:03:45.877 14:23:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.877 14:23:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.877 14:23:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.877 14:23:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.877 14:23:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.877 14:23:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.877 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 41858100 kB' 'MemUsed: 6206764 kB' 'SwapCached: 0 kB' 'Active: 2491564 kB' 'Inactive: 119896 kB' 'Active(anon): 2244796 kB' 'Inactive(anon): 0 kB' 'Active(file): 246768 kB' 'Inactive(file): 119896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2388388 kB' 'Mapped: 67208 kB' 'AnonPages: 226188 kB' 'Shmem: 2021724 kB' 'KernelStack: 10952 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101836 kB' 'Slab: 355876 kB' 'SReclaimable: 101836 kB' 'SUnreclaim: 254040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 
00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.878 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.878 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.879 14:23:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # continue 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.879 14:23:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.879 14:23:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.879 14:23:28 -- setup/common.sh@33 -- # echo 0 00:03:45.879 14:23:28 -- setup/common.sh@33 -- # return 0 00:03:45.879 14:23:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.879 14:23:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.879 14:23:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.879 14:23:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.879 14:23:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.879 node0=1024 expecting 1024 00:03:45.879 14:23:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.879 00:03:45.879 real 0m11.909s 00:03:45.879 user 0m4.225s 00:03:45.879 sys 0m7.797s 00:03:45.879 14:23:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.879 14:23:28 -- common/autotest_common.sh@10 -- # set +x 00:03:45.879 ************************************ 00:03:45.879 END TEST no_shrink_alloc 00:03:45.879 ************************************ 00:03:45.879 14:23:28 -- setup/hugepages.sh@217 -- # clear_hp 00:03:45.879 14:23:28 -- setup/hugepages.sh@37 -- # local node hp 00:03:45.879 14:23:28 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.879 14:23:28 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.879 14:23:28 -- setup/hugepages.sh@41 -- # echo 0 
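The scans condensed above are all calls to setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo when a node argument is given), strips the "Node N " prefix that per-node files carry, then walks the lines with IFS=': ' until the requested key matches and echoes its value. A minimal standalone sketch of that pattern follows; the helper body mirrors what the xtrace shows, while the three lines at the bottom are illustrative, plugging in this run's values:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # Sketch of the get_meminfo pattern traced above: snapshot a meminfo
    # file into an array, strip the "Node N " prefix carried by per-node
    # files, then scan line by line for the requested key.
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local mem var val _ line
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    # the kind of accounting check hugepages.sh performs above: 1024 == 1024 + 0 + 0
    (( $(get_meminfo HugePages_Total) == 1024 + surp + resv ))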
00:03:45.879 14:23:28 -- setup/hugepages.sh@217 -- # clear_hp
00:03:45.879 14:23:28 -- setup/hugepages.sh@37 -- # local node hp
00:03:45.879 14:23:28 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:45.879 14:23:28 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.879 14:23:28 -- setup/hugepages.sh@41 -- # echo 0
00:03:45.879 14:23:28 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.879 14:23:28 -- setup/hugepages.sh@41 -- # echo 0
00:03:45.879 14:23:28 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:45.879 14:23:28 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.879 14:23:28 -- setup/hugepages.sh@41 -- # echo 0
00:03:45.879 14:23:28 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.879 14:23:28 -- setup/hugepages.sh@41 -- # echo 0
00:03:45.879 14:23:28 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:45.879 14:23:28 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:45.879
00:03:45.879 real	0m45.540s
00:03:45.879 user	0m15.422s
00:03:45.879 sys	0m27.473s
00:03:45.879 14:23:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:45.879 14:23:28 -- common/autotest_common.sh@10 -- # set +x
00:03:45.879 ************************************
00:03:45.879 END TEST hugepages
00:03:45.879 ************************************
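The clear_hp teardown traced just above releases every per-node hugepage pool by writing 0 into its nr_hugepages knob (two pool sizes per node in this run), then exports CLEAR_HUGE=yes so later setup invocations re-reserve pages. A minimal sketch, assuming the kernel's standard sysfs hugepage layout and root privileges:

    #!/usr/bin/env bash
    # Sketch of the clear_hp teardown traced above: zero every hugepage
    # pool on every NUMA node, then flag that the pools were cleared.
    shopt -s nullglob   # make the loops skip cleanly when a node has no pools
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # needs root
        done
    done
    export CLEAR_HUGE=yes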
00:03:46.139 14:23:28 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:46.139 14:23:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:46.139 14:23:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:46.139 14:23:28 -- common/autotest_common.sh@10 -- # set +x
00:03:46.139 ************************************
00:03:46.139 START TEST driver
00:03:46.139 ************************************
00:03:46.139 14:23:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:46.139 * Looking for test storage...
00:03:46.139 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:46.139 14:23:28 -- setup/driver.sh@68 -- # setup reset
00:03:46.139 14:23:28 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:46.139 14:23:28 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:54.262 14:23:35 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:54.262 14:23:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:54.262 14:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:54.262 14:23:35 -- common/autotest_common.sh@10 -- # set +x
00:03:54.262 ************************************
00:03:54.262 START TEST guess_driver
00:03:54.262 ************************************
00:03:54.262 14:23:35 -- common/autotest_common.sh@1104 -- # guess_driver
00:03:54.262 14:23:35 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:54.262 14:23:35 -- setup/driver.sh@47 -- # local fail=0
00:03:54.262 14:23:35 -- setup/driver.sh@49 -- # pick_driver
00:03:54.262 14:23:35 -- setup/driver.sh@36 -- # vfio
00:03:54.262 14:23:35 -- setup/driver.sh@21 -- # local iommu_grups
00:03:54.262 14:23:35 -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:54.262 14:23:35 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:54.262 14:23:35 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:54.262 14:23:35 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:54.262 14:23:35 -- setup/driver.sh@29 -- # (( 190 > 0 ))
00:03:54.262 14:23:35 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:54.262 14:23:35 -- setup/driver.sh@14 -- # mod vfio_pci
00:03:54.262 14:23:35 -- setup/driver.sh@12 -- # dep vfio_pci
00:03:54.262 14:23:35 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:54.262 14:23:35 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:54.262 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:54.262 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:54.262 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:54.262 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:54.262 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:54.262 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:54.262 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:54.262 14:23:35 -- setup/driver.sh@30 -- # return 0
00:03:54.262 14:23:35 -- setup/driver.sh@37 -- # echo vfio-pci
00:03:54.262 14:23:35 -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:54.262 14:23:35 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:54.262 14:23:35 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:54.262 Looking for driver=vfio-pci
00:03:54.262 14:23:35 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:54.262 14:23:35 -- setup/driver.sh@45 -- # setup output config
00:03:54.262 14:23:35 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.262 14:23:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:57.552 14:23:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:57.552 14:23:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:57.552 14:23:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:57.552 14:23:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:57.552 14:23:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:57.552 14:23:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:00.842 14:23:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:00.843 14:23:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:00.843 14:23:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:02.746 14:23:44 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:02.746 14:23:44 -- setup/driver.sh@65 -- # setup reset
00:04:02.746 14:23:44 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:02.746 14:23:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:10.872
00:04:10.872 real    0m16.419s
00:04:10.872 user    0m4.288s
00:04:10.872 sys     0m8.174s
00:04:10.872 14:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:10.872 14:23:52 -- common/autotest_common.sh@10 -- # set +x
00:04:10.872 ************************************
00:04:10.872 END TEST guess_driver
00:04:10.872 ************************************
00:04:10.872
00:04:10.872 real    0m23.748s
00:04:10.872 user    0m6.448s
00:04:10.872 sys     0m12.553s
00:04:10.872 14:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:10.872 14:23:52 -- common/autotest_common.sh@10 -- # set +x
00:04:10.872 ************************************
00:04:10.872 END TEST driver
00:04:10.872 ************************************
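Each START/END pair above, with the real/user/sys triplet between them, comes from a run_test-style wrapper: the suite name is banner-printed, the test command is timed, and a nonzero status fails the run. A minimal re-creation of that harness (a sketch, not the exact autotest_common.sh code):

  run_test() {
    local name=$1; shift
    printf '%s\nSTART TEST %s\n%s\n' '************************************' \
      "$name" '************************************'
    time "$@" || return 1    # produces the real/user/sys lines seen in the log
    printf '%s\nEND TEST %s\n%s\n' '************************************' \
      "$name" '************************************'
  }

  # Usage, mirroring the log:
  #   run_test guess_driver guess_driver
  #   run_test devices path/to/devices.sh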
00:04:10.872 14:23:52 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:10.872 14:23:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:10.872 14:23:52 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:10.872 14:23:52 -- common/autotest_common.sh@10 -- # set +x
00:04:10.872 ************************************
00:04:10.872 START TEST devices
00:04:10.872 ************************************
00:04:10.872 14:23:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:10.872 * Looking for test storage...
00:04:10.872 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:04:10.872 14:23:52 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:10.872 14:23:52 -- setup/devices.sh@192 -- # setup reset
00:04:10.872 14:23:52 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:10.872 14:23:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:16.148 14:23:58 -- setup/devices.sh@194 -- # get_zoned_devs
00:04:16.148 14:23:58 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:04:16.148 14:23:58 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:04:16.148 14:23:58 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:04:16.148 14:23:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:04:16.148 14:23:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:04:16.148 14:23:58 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:04:16.148 14:23:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:16.148 14:23:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:16.148 14:23:58 -- setup/devices.sh@196 -- # blocks=()
00:04:16.148 14:23:58 -- setup/devices.sh@196 -- # declare -a blocks
00:04:16.148 14:23:58 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:16.148 14:23:58 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:16.148 14:23:58 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:16.148 14:23:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:16.148 14:23:58 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:16.148 14:23:58 -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:16.148 14:23:58 -- setup/devices.sh@202 -- # pci=0000:1a:00.0
00:04:16.148 14:23:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]]
00:04:16.148 14:23:58 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:16.148 14:23:58 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:04:16.148 14:23:58 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:16.148 No valid GPT data, bailing
00:04:16.148 14:23:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:16.148 14:23:58 -- scripts/common.sh@393 -- # pt=
00:04:16.148 14:23:58 -- scripts/common.sh@394 -- # return 1
00:04:16.148 14:23:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:16.148 14:23:58 -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:16.148 14:23:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:16.148 14:23:58 -- setup/common.sh@80 -- # echo 4000787030016
00:04:16.148 14:23:58 -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size ))
00:04:16.148 14:23:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:16.148 14:23:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0
00:04:16.148 14:23:58 -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:16.148 14:23:58 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:16.148 14:23:58 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:16.148 14:23:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:16.148 14:23:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:16.148 14:23:58 -- common/autotest_common.sh@10 -- # set +x
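The device-selection pass above skips zoned namespaces, probes for an existing partition table, and keeps only disks of at least min_disk_size (3 GiB; the disk here reported 4000787030016 bytes). Roughly, assuming the usual sysfs layout where size is in 512-byte sectors:

  min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the log
  blocks=()
  for dev in /sys/block/nvme*; do
    zoned=none
    [[ -e $dev/queue/zoned ]] && zoned=$(<"$dev/queue/zoned")
    [[ $zoned != none ]] && continue          # skip zoned namespaces
    (( $(<"$dev/size") * 512 >= min_disk_size )) || continue
    blocks+=("${dev##*/}")                    # e.g. nvme0n1
  done
  printf '%s\n' "${blocks[@]}"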
00:04:16.148 ************************************
00:04:16.148 START TEST nvme_mount
00:04:16.148 ************************************
00:04:16.148 14:23:58 -- common/autotest_common.sh@1104 -- # nvme_mount
00:04:16.148 14:23:58 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:16.148 14:23:58 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:16.148 14:23:58 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:16.148 14:23:58 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:16.148 14:23:58 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:16.148 14:23:58 -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:16.148 14:23:58 -- setup/common.sh@40 -- # local part_no=1
00:04:16.148 14:23:58 -- setup/common.sh@41 -- # local size=1073741824
00:04:16.148 14:23:58 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:16.148 14:23:58 -- setup/common.sh@44 -- # parts=()
00:04:16.148 14:23:58 -- setup/common.sh@44 -- # local parts
00:04:16.148 14:23:58 -- setup/common.sh@46 -- # (( part = 1 ))
00:04:16.148 14:23:58 -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:16.148 14:23:58 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:16.148 14:23:58 -- setup/common.sh@46 -- # (( part++ ))
00:04:16.148 14:23:58 -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:16.148 14:23:58 -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:16.148 14:23:58 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:16.148 14:23:58 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:17.525 Creating new GPT entries in memory.
00:04:17.525 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:17.525 other utilities.
00:04:17.525 14:23:59 -- setup/common.sh@57 -- # (( part = 1 ))
00:04:17.525 14:23:59 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:17.525 14:23:59 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:17.525 14:23:59 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:17.525 14:23:59 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:18.463 Creating new GPT entries in memory.
00:04:18.463 The operation has completed successfully.
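partition_drive above converts the 1 GiB size to 512-byte sectors (1073741824 / 512 = 2097152) and asks sgdisk for exactly that range, 2048 through 2099199. The equivalent standalone commands, destructive and meant only for a scratch disk (udevadm settle stands in here for sync_dev_uevents.sh):

  disk=/dev/nvme0n1
  size_sectors=$((1073741824 / 512))                    # 2097152
  sgdisk "$disk" --zap-all                              # wipe GPT and protective MBR
  flock "$disk" sgdisk "$disk" --new=1:2048:$((2048 + size_sectors - 1))
  udevadm settle                                        # wait for /dev/nvme0n1p1 to appear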
00:04:18.463 14:24:00 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.463 14:24:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.463 14:24:00 -- setup/common.sh@62 -- # wait 663326 00:04:18.463 14:24:00 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.463 14:24:00 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:18.463 14:24:00 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.463 14:24:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:18.463 14:24:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:18.463 14:24:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.463 14:24:00 -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.463 14:24:00 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:18.463 14:24:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:18.463 14:24:00 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.463 14:24:00 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.463 14:24:00 -- setup/devices.sh@53 -- # local found=0 00:04:18.463 14:24:00 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.463 14:24:00 -- setup/devices.sh@56 -- # : 00:04:18.463 14:24:00 -- setup/devices.sh@59 -- # local pci status 00:04:18.463 14:24:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.463 14:24:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:18.463 14:24:00 -- setup/devices.sh@47 -- # setup output config 00:04:18.464 14:24:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.464 14:24:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:21.754 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:21.754 14:24:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:21.754 14:24:04 -- setup/devices.sh@63 -- # found=1 00:04:21.754 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.754 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:21.754 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.754 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:21.754 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.754 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:21.754 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.754 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:21.754 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 
]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.012 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.012 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.013 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.013 14:24:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:22.013 14:24:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.551 14:24:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.551 14:24:06 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.551 14:24:06 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.551 14:24:06 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.551 14:24:06 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.551 14:24:06 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:24.551 14:24:06 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.551 14:24:06 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.551 14:24:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.551 14:24:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.551 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.551 14:24:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.551 14:24:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.551 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:24.551 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:24.551 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe 
(PMBR): 55 aa 00:04:24.551 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:24.552 14:24:06 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:24.552 14:24:06 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:24.552 14:24:06 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.552 14:24:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:24.552 14:24:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:24.552 14:24:06 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.552 14:24:06 -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.552 14:24:06 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:24.552 14:24:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:24.552 14:24:06 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.552 14:24:06 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.552 14:24:06 -- setup/devices.sh@53 -- # local found=0 00:04:24.552 14:24:06 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.552 14:24:06 -- setup/devices.sh@56 -- # : 00:04:24.552 14:24:06 -- setup/devices.sh@59 -- # local pci status 00:04:24.552 14:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.552 14:24:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:24.552 14:24:06 -- setup/devices.sh@47 -- # setup output config 00:04:24.552 14:24:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.552 14:24:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:27.934 14:24:10 -- setup/devices.sh@63 -- # found=1 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.934 14:24:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:27.934 14:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.472 14:24:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.472 14:24:12 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:30.472 14:24:12 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.472 14:24:12 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.472 14:24:12 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.472 14:24:12 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.472 14:24:12 -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:04:30.472 14:24:12 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:30.472 14:24:12 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:30.472 14:24:12 -- setup/devices.sh@50 -- # local mount_point= 00:04:30.472 14:24:12 -- setup/devices.sh@51 -- # local test_file= 00:04:30.472 14:24:12 -- setup/devices.sh@53 -- # local found=0 00:04:30.472 14:24:12 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:30.472 14:24:12 -- setup/devices.sh@59 -- # local pci status 00:04:30.472 14:24:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.472 14:24:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:30.472 14:24:12 -- setup/devices.sh@47 -- # setup output config 00:04:30.472 14:24:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.472 14:24:12 -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:33.767 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.767 14:24:16 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:33.767 14:24:16 -- setup/devices.sh@63 -- # found=1 00:04:33.767 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.767 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.767 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.767 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.767 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.767 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.767 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.767 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.767 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.767 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.767 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.768 14:24:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:33.768 14:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.307 14:24:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.307 14:24:18 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:36.307 14:24:18 -- setup/devices.sh@68 -- # return 0 00:04:36.307 14:24:18 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:36.307 14:24:18 -- setup/devices.sh@20 -- # mountpoint -q 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.307 14:24:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:36.307 14:24:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:36.307 14:24:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:36.307 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:36.307
00:04:36.307 real    0m19.893s
00:04:36.307 user    0m5.910s
00:04:36.307 sys     0m11.802s
00:04:36.307 14:24:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:36.307 14:24:18 -- common/autotest_common.sh@10 -- # set +x
00:04:36.307 ************************************
00:04:36.307 END TEST nvme_mount
00:04:36.307 ************************************
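The nvme_mount pass that just ended follows a fixed shape: make a filesystem on the partition, mount it under the test directory, drop a marker file, confirm setup.sh leaves the busy device alone, then unmount and wipe. A compressed sketch of that life cycle (paths shortened; the verify step in the actual log is a PCI_ALLOWED scan rather than this simple existence check):

  dev=/dev/nvme0n1p1
  mnt=$PWD/nvme_mount
  mkdir -p "$mnt"
  mkfs.ext4 -qF "$dev"            # quiet, force, as in setup/common.sh
  mount "$dev" "$mnt"
  : > "$mnt/test_nvme"            # dummy file proving the mount is writable
  [[ -e $mnt/test_nvme ]]         # verify
  rm "$mnt/test_nvme"
  umount "$mnt"
  wipefs --all "$dev"             # erases the ext4 magic, the "53 ef" in the log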
00:04:36.307 14:24:18 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:36.307 14:24:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:36.307 14:24:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:36.307 14:24:18 -- common/autotest_common.sh@10 -- # set +x
00:04:36.307 ************************************
00:04:36.307 START TEST dm_mount
00:04:36.307 ************************************
00:04:36.307 14:24:18 -- common/autotest_common.sh@1104 -- # dm_mount
00:04:36.307 14:24:18 -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:36.307 14:24:18 -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:36.307 14:24:18 -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:36.307 14:24:18 -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:36.307 14:24:18 -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:36.307 14:24:18 -- setup/common.sh@40 -- # local part_no=2
00:04:36.307 14:24:18 -- setup/common.sh@41 -- # local size=1073741824
00:04:36.307 14:24:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:36.307 14:24:18 -- setup/common.sh@44 -- # parts=()
00:04:36.307 14:24:18 -- setup/common.sh@44 -- # local parts
00:04:36.307 14:24:18 -- setup/common.sh@46 -- # (( part = 1 ))
00:04:36.307 14:24:18 -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:36.307 14:24:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:36.307 14:24:18 -- setup/common.sh@46 -- # (( part++ ))
00:04:36.307 14:24:18 -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:36.307 14:24:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:36.307 14:24:18 -- setup/common.sh@46 -- # (( part++ ))
00:04:36.307 14:24:18 -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:36.307 14:24:18 -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:36.307 14:24:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:36.307 14:24:18 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:37.245 Creating new GPT entries in memory.
00:04:37.245 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:37.245 other utilities.
00:04:37.245 14:24:19 -- setup/common.sh@57 -- # (( part = 1 ))
00:04:37.245 14:24:19 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:37.245 14:24:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:37.245 14:24:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:37.245 14:24:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:38.184 Creating new GPT entries in memory.
00:04:38.184 The operation has completed successfully.
00:04:38.184 14:24:20 -- setup/common.sh@57 -- # (( part++ ))
00:04:38.184 14:24:20 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:38.184 14:24:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:38.184 14:24:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:38.184 14:24:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:39.123 The operation has completed successfully.
00:04:39.123 14:24:21 -- setup/common.sh@57 -- # (( part++ ))
00:04:39.123 14:24:21 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:39.123 14:24:21 -- setup/common.sh@62 -- # wait 668951
00:04:39.382 14:24:21 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:39.382 14:24:21 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:39.382 14:24:21 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:39.382 14:24:21 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:39.382 14:24:21 -- setup/devices.sh@160 -- # for t in {1..5}
00:04:39.382 14:24:21 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:39.382 14:24:21 -- setup/devices.sh@161 -- # break
00:04:39.382 14:24:21 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:39.382 14:24:21 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:39.382 14:24:21 -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:39.382 14:24:21 -- setup/devices.sh@166 -- # dm=dm-0
00:04:39.382 14:24:21 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:39.382 14:24:21 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:04:39.382 14:24:21 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:39.382 14:24:21 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size=
00:04:39.382 14:24:21 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:39.382 14:24:21 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:39.382 14:24:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:39.382 14:24:21 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:39.382 14:24:21 -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:39.382 14:24:21 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0
00:04:39.382 14:24:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:39.382 14:24:21 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:39.382 14:24:21 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:39.382 14:24:21 -- setup/devices.sh@53 -- # local found=0
00:04:39.383 14:24:21 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:39.383 14:24:21 -- setup/devices.sh@56 -- # :
00:04:39.383
14:24:21 -- setup/devices.sh@59 -- # local pci status 00:04:39.383 14:24:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.383 14:24:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:39.383 14:24:21 -- setup/devices.sh@47 -- # setup output config 00:04:39.383 14:24:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.383 14:24:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:43.579 14:24:25 -- setup/devices.sh@63 -- # found=1 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.579 14:24:25 -- setup/devices.sh@62 -- # [[ 
0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:43.579 14:24:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.503 14:24:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.503 14:24:27 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:45.503 14:24:27 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:45.503 14:24:27 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.503 14:24:27 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.503 14:24:27 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:45.503 14:24:27 -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:45.503 14:24:27 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:45.503 14:24:27 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:45.503 14:24:27 -- setup/devices.sh@50 -- # local mount_point= 00:04:45.503 14:24:27 -- setup/devices.sh@51 -- # local test_file= 00:04:45.503 14:24:27 -- setup/devices.sh@53 -- # local found=0 00:04:45.503 14:24:27 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:45.503 14:24:27 -- setup/devices.sh@59 -- # local pci status 00:04:45.503 14:24:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.503 14:24:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:45.503 14:24:27 -- setup/devices.sh@47 -- # setup output config 00:04:45.503 14:24:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.503 14:24:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:48.795 14:24:31 -- setup/devices.sh@63 -- # found=1 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- 
# read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.795 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.795 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.796 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.796 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.796 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.796 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.796 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.796 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.796 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.796 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.796 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.796 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.796 14:24:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.796 14:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.329 14:24:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.329 14:24:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:51.329 14:24:33 -- setup/devices.sh@68 -- # return 0 00:04:51.329 14:24:33 -- setup/devices.sh@187 -- # cleanup_dm 00:04:51.329 14:24:33 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:51.329 14:24:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.329 14:24:33 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:51.329 14:24:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.329 14:24:33 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:51.329 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:51.329 14:24:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.329 14:24:33 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:51.329 00:04:51.329 real 0m14.842s 00:04:51.329 user 0m3.933s 00:04:51.329 sys 0m7.898s 00:04:51.329 14:24:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.329 14:24:33 -- common/autotest_common.sh@10 -- # set +x 00:04:51.329 ************************************ 00:04:51.329 END TEST dm_mount 00:04:51.329 ************************************ 00:04:51.329 14:24:33 -- setup/devices.sh@1 -- # cleanup 00:04:51.329 14:24:33 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:51.329 14:24:33 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.329 14:24:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.329 14:24:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:51.329 14:24:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.329 14:24:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.329 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:51.329 /dev/nvme0n1: 8 
bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:51.329 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:51.329 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:51.329 14:24:33 -- setup/devices.sh@12 -- # cleanup_dm
00:04:51.329 14:24:33 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:51.329 14:24:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:51.329 14:24:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:51.329 14:24:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:51.329 14:24:33 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:51.329 14:24:33 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:51.329
00:04:51.329 real    0m41.523s
00:04:51.329 user    0m12.176s
00:04:51.329 sys     0m24.048s
00:04:51.329 14:24:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:51.329 14:24:33 -- common/autotest_common.sh@10 -- # set +x
00:04:51.329 ************************************
00:04:51.329 END TEST devices
00:04:51.329 ************************************
00:04:51.329
00:04:51.329 real    2m29.773s
00:04:51.329 user    0m46.108s
00:04:51.329 sys     1m27.290s
00:04:51.329 14:24:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:51.329 14:24:33 -- common/autotest_common.sh@10 -- # set +x
00:04:51.329 ************************************
00:04:51.329 END TEST setup.sh
00:04:51.329 ************************************
00:04:51.329 14:24:33 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:04:55.525 Hugepages
00:04:55.525 node     hugesize     free /  total
00:04:55.525 node0   1048576kB        0 /      0
00:04:55.525 node0      2048kB     2048 /   2048
00:04:55.525 node1   1048576kB        0 /      0
00:04:55.525 node1      2048kB        0 /      0
00:04:55.525
00:04:55.525 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:55.525 I/OAT    0000:00:04.0    8086   2021   0       ioatdma          -          -
00:04:55.525 I/OAT    0000:00:04.1    8086   2021   0       ioatdma          -          -
00:04:55.525 I/OAT    0000:00:04.2    8086   2021   0       ioatdma          -          -
00:04:55.525 I/OAT    0000:00:04.3    8086   2021   0       ioatdma          -          -
00:04:55.525 I/OAT    0000:00:04.4    8086   2021   0       ioatdma          -          -
00:04:55.525 I/OAT    0000:00:04.5    8086   2021   0       ioatdma          -          -
00:04:55.525 I/OAT    0000:00:04.6    8086   2021   0       ioatdma          -          -
00:04:55.525 I/OAT    0000:00:04.7    8086   2021   0       ioatdma          -          -
00:04:55.525 NVMe     0000:1a:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:04:55.525 I/OAT    0000:80:04.0    8086   2021   1       ioatdma          -          -
00:04:55.525 I/OAT    0000:80:04.1    8086   2021   1       ioatdma          -          -
00:04:55.525 I/OAT    0000:80:04.2    8086   2021   1       ioatdma          -          -
00:04:55.525 I/OAT    0000:80:04.3    8086   2021   1       ioatdma          -          -
00:04:55.525 I/OAT    0000:80:04.4    8086   2021   1       ioatdma          -          -
00:04:55.525 I/OAT    0000:80:04.5    8086   2021   1       ioatdma          -          -
00:04:55.525 I/OAT    0000:80:04.6    8086   2021   1       ioatdma          -          -
00:04:55.525 I/OAT    0000:80:04.7    8086   2021   1       ioatdma          -          -
00:04:55.525 14:24:37 -- spdk/autotest.sh@141 -- # uname -s
00:04:55.525 14:24:37 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]]
00:04:55.525 14:24:37 -- spdk/autotest.sh@143 -- # nvme_namespace_revert
00:04:55.525 14:24:37 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:58.831 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:58.831 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:58.831 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:58.831 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:58.831 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:58.831 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:58.831 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:02.122 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:04.029 14:24:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:05.408 14:24:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:05.408 14:24:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:05.408 14:24:47 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:05.408 14:24:47 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:05.408 14:24:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:05.408 14:24:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:05.408 14:24:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.408 14:24:47 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:05.408 14:24:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:05.408 14:24:47 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:05.408 14:24:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:1a:00.0 00:05:05.408 14:24:47 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.696 Waiting for block devices as requested 00:05:08.696 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:05:08.955 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:08.955 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:09.215 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:09.215 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:09.215 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:09.475 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:09.475 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:09.475 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:09.734 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:09.734 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:09.734 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:09.994 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:09.994 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:09.994 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:10.253 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:10.253 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:12.788 14:24:54 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.788 14:24:54 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:05:12.788 14:24:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:12.788 14:24:54 -- common/autotest_common.sh@1487 -- # grep 0000:1a:00.0/nvme/nvme 00:05:12.788 14:24:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:12.788 14:24:54 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:05:12.788 14:24:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:12.788 14:24:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:12.788 14:24:54 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:12.788 14:24:54 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:12.788 14:24:54 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:12.788 14:24:54 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.788 14:24:54 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.788 14:24:54 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:05:12.788 14:24:54 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.788 14:24:54 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.788 14:24:54 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:12.788 14:24:54 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.788 14:24:54 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:12.789 14:24:54 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.789 14:24:54 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.789 14:24:54 -- common/autotest_common.sh@1542 -- # continue 00:05:12.789 14:24:54 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:12.789 14:24:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:12.789 14:24:54 -- common/autotest_common.sh@10 -- # set +x 00:05:12.789 14:24:54 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:12.789 14:24:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:12.789 14:24:54 -- common/autotest_common.sh@10 -- # set +x 00:05:12.789 14:24:54 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:16.079 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.079 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.079 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.338 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:19.630 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:22.162 14:25:04 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:22.162 14:25:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:22.162 14:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:22.162 14:25:04 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:22.162 14:25:04 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:22.162 14:25:04 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:22.162 14:25:04 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:22.162 14:25:04 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:22.162 
14:25:04 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:22.162 14:25:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:22.162 14:25:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:22.162 14:25:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:22.162 14:25:04 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:22.162 14:25:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:22.162 14:25:04 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:22.162 14:25:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:1a:00.0 00:05:22.162 14:25:04 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:22.162 14:25:04 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:05:22.162 14:25:04 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:22.162 14:25:04 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:22.162 14:25:04 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:22.162 14:25:04 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:1a:00.0 00:05:22.162 14:25:04 -- common/autotest_common.sh@1577 -- # [[ -z 0000:1a:00.0 ]] 00:05:22.162 14:25:04 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=679497 00:05:22.162 14:25:04 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.162 14:25:04 -- common/autotest_common.sh@1583 -- # waitforlisten 679497 00:05:22.162 14:25:04 -- common/autotest_common.sh@819 -- # '[' -z 679497 ']' 00:05:22.162 14:25:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.162 14:25:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.162 14:25:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.163 14:25:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.163 14:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:22.163 [2024-10-01 14:25:04.294872] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:22.163 [2024-10-01 14:25:04.294945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679497 ] 00:05:22.163 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.163 [2024-10-01 14:25:04.380952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.163 [2024-10-01 14:25:04.461215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.163 [2024-10-01 14:25:04.461325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.729 14:25:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:22.729 14:25:05 -- common/autotest_common.sh@852 -- # return 0 00:05:22.729 14:25:05 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:22.729 14:25:05 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:22.729 14:25:05 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:05:26.014 nvme0n1 00:05:26.015 14:25:08 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:26.015 [2024-10-01 14:25:08.306912] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:26.015 request: 00:05:26.015 { 00:05:26.015 "nvme_ctrlr_name": "nvme0", 00:05:26.015 "password": "test", 00:05:26.015 "method": "bdev_nvme_opal_revert", 00:05:26.015 "req_id": 1 00:05:26.015 } 00:05:26.015 Got JSON-RPC error response 00:05:26.015 response: 00:05:26.015 { 00:05:26.015 "code": -32602, 00:05:26.015 "message": "Invalid parameters" 00:05:26.015 } 00:05:26.015 14:25:08 -- common/autotest_common.sh@1589 -- # true 00:05:26.015 14:25:08 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:26.015 14:25:08 -- common/autotest_common.sh@1593 -- # killprocess 679497 00:05:26.015 14:25:08 -- common/autotest_common.sh@926 -- # '[' -z 679497 ']' 00:05:26.015 14:25:08 -- common/autotest_common.sh@930 -- # kill -0 679497 00:05:26.015 14:25:08 -- common/autotest_common.sh@931 -- # uname 00:05:26.015 14:25:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:26.015 14:25:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 679497 00:05:26.015 14:25:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:26.015 14:25:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:26.015 14:25:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 679497' 00:05:26.015 killing process with pid 679497 00:05:26.015 14:25:08 -- common/autotest_common.sh@945 -- # kill 679497 00:05:26.015 14:25:08 -- common/autotest_common.sh@950 -- # wait 679497 00:05:30.209 14:25:12 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:30.209 14:25:12 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:30.209 14:25:12 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:30.209 14:25:12 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:30.209 14:25:12 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:30.209 14:25:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:30.209 14:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:30.209 14:25:12 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:30.209 14:25:12 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.209 14:25:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.209 14:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:30.209 ************************************ 00:05:30.209 START TEST env 00:05:30.209 ************************************ 00:05:30.209 14:25:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:30.209 * Looking for test storage... 00:05:30.209 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:30.209 14:25:12 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.209 14:25:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.209 14:25:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.209 14:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:30.209 ************************************ 00:05:30.209 START TEST env_memory 00:05:30.209 ************************************ 00:05:30.210 14:25:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.210 00:05:30.210 00:05:30.210 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.210 http://cunit.sourceforge.net/ 00:05:30.210 00:05:30.210 00:05:30.210 Suite: memory 00:05:30.210 Test: alloc and free memory map ...[2024-10-01 14:25:12.538058] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.210 passed 00:05:30.210 Test: mem map translation ...[2024-10-01 14:25:12.551550] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.210 [2024-10-01 14:25:12.551570] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.210 [2024-10-01 14:25:12.551599] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.210 [2024-10-01 14:25:12.551609] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.210 passed 00:05:30.210 Test: mem map registration ...[2024-10-01 14:25:12.571713] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:30.210 [2024-10-01 14:25:12.571741] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:30.210 passed 00:05:30.210 Test: mem map adjacent registrations ...passed 00:05:30.210 00:05:30.210 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.210 suites 1 1 n/a 0 0 00:05:30.210 tests 4 4 4 0 0 00:05:30.210 asserts 152 152 152 0 n/a 00:05:30.210 00:05:30.210 Elapsed time = 0.084 seconds 00:05:30.210 00:05:30.210 real 0m0.097s 00:05:30.210 user 0m0.085s 00:05:30.210 sys 0m0.012s 00:05:30.210 14:25:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.210 14:25:12 -- common/autotest_common.sh@10 
-- # set +x 00:05:30.210 ************************************ 00:05:30.210 END TEST env_memory 00:05:30.210 ************************************ 00:05:30.210 14:25:12 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.210 14:25:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.210 14:25:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.210 14:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 ************************************ 00:05:30.210 START TEST env_vtophys 00:05:30.210 ************************************ 00:05:30.210 14:25:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.210 EAL: lib.eal log level changed from notice to debug 00:05:30.210 EAL: Detected lcore 0 as core 0 on socket 0 00:05:30.210 EAL: Detected lcore 1 as core 1 on socket 0 00:05:30.210 EAL: Detected lcore 2 as core 2 on socket 0 00:05:30.210 EAL: Detected lcore 3 as core 3 on socket 0 00:05:30.210 EAL: Detected lcore 4 as core 4 on socket 0 00:05:30.210 EAL: Detected lcore 5 as core 8 on socket 0 00:05:30.210 EAL: Detected lcore 6 as core 9 on socket 0 00:05:30.210 EAL: Detected lcore 7 as core 10 on socket 0 00:05:30.210 EAL: Detected lcore 8 as core 11 on socket 0 00:05:30.210 EAL: Detected lcore 9 as core 16 on socket 0 00:05:30.210 EAL: Detected lcore 10 as core 17 on socket 0 00:05:30.210 EAL: Detected lcore 11 as core 18 on socket 0 00:05:30.210 EAL: Detected lcore 12 as core 19 on socket 0 00:05:30.210 EAL: Detected lcore 13 as core 20 on socket 0 00:05:30.210 EAL: Detected lcore 14 as core 24 on socket 0 00:05:30.210 EAL: Detected lcore 15 as core 25 on socket 0 00:05:30.210 EAL: Detected lcore 16 as core 26 on socket 0 00:05:30.210 EAL: Detected lcore 17 as core 27 on socket 0 00:05:30.210 EAL: Detected lcore 18 as core 0 on socket 1 00:05:30.210 EAL: Detected lcore 19 as core 1 on socket 1 00:05:30.210 EAL: Detected lcore 20 as core 2 on socket 1 00:05:30.210 EAL: Detected lcore 21 as core 3 on socket 1 00:05:30.210 EAL: Detected lcore 22 as core 4 on socket 1 00:05:30.210 EAL: Detected lcore 23 as core 8 on socket 1 00:05:30.210 EAL: Detected lcore 24 as core 9 on socket 1 00:05:30.210 EAL: Detected lcore 25 as core 10 on socket 1 00:05:30.210 EAL: Detected lcore 26 as core 11 on socket 1 00:05:30.210 EAL: Detected lcore 27 as core 16 on socket 1 00:05:30.210 EAL: Detected lcore 28 as core 17 on socket 1 00:05:30.210 EAL: Detected lcore 29 as core 18 on socket 1 00:05:30.210 EAL: Detected lcore 30 as core 19 on socket 1 00:05:30.210 EAL: Detected lcore 31 as core 20 on socket 1 00:05:30.210 EAL: Detected lcore 32 as core 24 on socket 1 00:05:30.210 EAL: Detected lcore 33 as core 25 on socket 1 00:05:30.210 EAL: Detected lcore 34 as core 26 on socket 1 00:05:30.210 EAL: Detected lcore 35 as core 27 on socket 1 00:05:30.210 EAL: Detected lcore 36 as core 0 on socket 0 00:05:30.210 EAL: Detected lcore 37 as core 1 on socket 0 00:05:30.210 EAL: Detected lcore 38 as core 2 on socket 0 00:05:30.210 EAL: Detected lcore 39 as core 3 on socket 0 00:05:30.210 EAL: Detected lcore 40 as core 4 on socket 0 00:05:30.210 EAL: Detected lcore 41 as core 8 on socket 0 00:05:30.210 EAL: Detected lcore 42 as core 9 on socket 0 00:05:30.210 EAL: Detected lcore 43 as core 10 on socket 0 00:05:30.210 EAL: Detected lcore 44 as core 11 on socket 0 00:05:30.210 EAL: Detected lcore 45 as core 16 on socket 0 00:05:30.210 EAL: Detected 
lcore 46 as core 17 on socket 0 00:05:30.210 EAL: Detected lcore 47 as core 18 on socket 0 00:05:30.210 EAL: Detected lcore 48 as core 19 on socket 0 00:05:30.210 EAL: Detected lcore 49 as core 20 on socket 0 00:05:30.210 EAL: Detected lcore 50 as core 24 on socket 0 00:05:30.210 EAL: Detected lcore 51 as core 25 on socket 0 00:05:30.210 EAL: Detected lcore 52 as core 26 on socket 0 00:05:30.210 EAL: Detected lcore 53 as core 27 on socket 0 00:05:30.210 EAL: Detected lcore 54 as core 0 on socket 1 00:05:30.210 EAL: Detected lcore 55 as core 1 on socket 1 00:05:30.210 EAL: Detected lcore 56 as core 2 on socket 1 00:05:30.210 EAL: Detected lcore 57 as core 3 on socket 1 00:05:30.210 EAL: Detected lcore 58 as core 4 on socket 1 00:05:30.210 EAL: Detected lcore 59 as core 8 on socket 1 00:05:30.210 EAL: Detected lcore 60 as core 9 on socket 1 00:05:30.210 EAL: Detected lcore 61 as core 10 on socket 1 00:05:30.210 EAL: Detected lcore 62 as core 11 on socket 1 00:05:30.210 EAL: Detected lcore 63 as core 16 on socket 1 00:05:30.210 EAL: Detected lcore 64 as core 17 on socket 1 00:05:30.210 EAL: Detected lcore 65 as core 18 on socket 1 00:05:30.210 EAL: Detected lcore 66 as core 19 on socket 1 00:05:30.210 EAL: Detected lcore 67 as core 20 on socket 1 00:05:30.210 EAL: Detected lcore 68 as core 24 on socket 1 00:05:30.210 EAL: Detected lcore 69 as core 25 on socket 1 00:05:30.210 EAL: Detected lcore 70 as core 26 on socket 1 00:05:30.210 EAL: Detected lcore 71 as core 27 on socket 1 00:05:30.210 EAL: Maximum logical cores by configuration: 128 00:05:30.210 EAL: Detected CPU lcores: 72 00:05:30.210 EAL: Detected NUMA nodes: 2 00:05:30.210 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:30.210 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:30.210 EAL: Checking presence of .so 'librte_eal.so' 00:05:30.210 EAL: Detected static linkage of DPDK 00:05:30.210 EAL: No shared files mode enabled, IPC will be disabled 00:05:30.210 EAL: Bus pci wants IOVA as 'DC' 00:05:30.210 EAL: Buses did not request a specific IOVA mode. 00:05:30.210 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:30.210 EAL: Selected IOVA mode 'VA' 00:05:30.210 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.210 EAL: Probing VFIO support... 00:05:30.210 EAL: IOMMU type 1 (Type 1) is supported 00:05:30.210 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:30.210 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:30.210 EAL: VFIO support initialized 00:05:30.210 EAL: Ask a virtual area of 0x2e000 bytes 00:05:30.210 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:30.210 EAL: Setting up physically contiguous memory... 
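The EAL preamble above probes VFIO and settles on IOVA mode 'VA', which requires the kernel IOMMU to be active. A minimal sketch for verifying those preconditions on a node like this one (standard sysfs paths; illustrative, not part of the autotest harness):

    # Hypothetical pre-flight check; assumes a stock Linux sysfs layout.
    if [ -d /sys/kernel/iommu_groups ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
      echo "IOMMU active: EAL can select IOVA mode 'VA'"
    fi
    lsmod | grep -q '^vfio_pci' || sudo modprobe vfio-pci  # driver the devices above were bound to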
00:05:30.210 EAL: Setting maximum number of open files to 524288 00:05:30.210 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:30.210 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:30.210 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:30.210 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.210 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:30.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.210 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.211 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:30.211 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:30.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.211 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:30.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.211 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:30.211 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:30.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.211 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:30.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.211 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:30.211 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:30.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.211 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:30.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.211 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:30.211 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:30.211 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:30.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.211 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:30.211 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.211 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:30.211 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:30.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.211 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:30.211 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.211 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:30.211 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:30.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.211 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:30.211 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.211 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:30.211 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:30.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.211 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:30.211 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.211 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:30.211 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:30.211 EAL: Hugepages will be freed exactly as allocated. 00:05:30.211 EAL: No shared files mode enabled, IPC is disabled 00:05:30.211 EAL: No shared files mode enabled, IPC is disabled 00:05:30.211 EAL: TSC frequency is ~2300000 KHz 00:05:30.211 EAL: Main lcore 0 is ready (tid=7fe705b0ca00;cpuset=[0]) 00:05:30.211 EAL: Trying to obtain current memory policy. 00:05:30.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.211 EAL: Restoring previous memory policy: 0 00:05:30.211 EAL: request: mp_malloc_sync 00:05:30.211 EAL: No shared files mode enabled, IPC is disabled 00:05:30.211 EAL: Heap on socket 0 was expanded by 2MB 00:05:30.211 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Mem event callback 'spdk:(nil)' registered 00:05:30.471 00:05:30.471 00:05:30.471 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.471 http://cunit.sourceforge.net/ 00:05:30.471 00:05:30.471 00:05:30.471 Suite: components_suite 00:05:30.471 Test: vtophys_malloc_test ...passed 00:05:30.471 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:30.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.471 EAL: Restoring previous memory policy: 4 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was expanded by 4MB 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was shrunk by 4MB 00:05:30.471 EAL: Trying to obtain current memory policy. 00:05:30.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.471 EAL: Restoring previous memory policy: 4 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.471 EAL: Trying to obtain current memory policy. 00:05:30.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.471 EAL: Restoring previous memory policy: 4 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.471 EAL: Trying to obtain current memory policy. 
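Each 'expanded by'/'shrunk by' pair above is the registered 'spdk:(nil)' mem event callback firing as vtophys allocates and frees progressively larger buffers from the DPDK heap, while the recurring 'No free 2048 kB hugepages reported on node 1' notice reflects the per-node 2 MB pools. A quick way to inspect those pools directly (standard sysfs paths; illustrative only):

    # Per-node 2 MB hugepage counts; node1 reporting 0 free matches the EAL notice.
    grep -H '' /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages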
00:05:30.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.471 EAL: Restoring previous memory policy: 4 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was expanded by 18MB 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was shrunk by 18MB 00:05:30.471 EAL: Trying to obtain current memory policy. 00:05:30.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.471 EAL: Restoring previous memory policy: 4 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was expanded by 34MB 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was shrunk by 34MB 00:05:30.471 EAL: Trying to obtain current memory policy. 00:05:30.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.471 EAL: Restoring previous memory policy: 4 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.471 EAL: Trying to obtain current memory policy. 00:05:30.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.471 EAL: Restoring previous memory policy: 4 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.471 EAL: Trying to obtain current memory policy. 00:05:30.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.471 EAL: Restoring previous memory policy: 4 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.471 EAL: request: mp_malloc_sync 00:05:30.471 EAL: No shared files mode enabled, IPC is disabled 00:05:30.471 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.731 EAL: request: mp_malloc_sync 00:05:30.731 EAL: No shared files mode enabled, IPC is disabled 00:05:30.731 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.731 EAL: Trying to obtain current memory policy. 
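The 'Setting policy MPOL_PREFERRED for socket 0' lines pin each allocation cycle to the first NUMA node. On this dual-socket box the topology behind those messages can be inspected with standard tools (assumes numactl/numastat are installed; illustrative only):

    numactl --hardware        # the 2 NUMA nodes EAL detected above
    numastat -m | head -n 20  # per-node memory breakdown, including hugepages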
00:05:30.731 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.731 EAL: Restoring previous memory policy: 4 00:05:30.731 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.731 EAL: request: mp_malloc_sync 00:05:30.731 EAL: No shared files mode enabled, IPC is disabled 00:05:30.731 EAL: Heap on socket 0 was expanded by 514MB 00:05:30.731 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.990 EAL: request: mp_malloc_sync 00:05:30.990 EAL: No shared files mode enabled, IPC is disabled 00:05:30.990 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.990 EAL: Trying to obtain current memory policy. 00:05:30.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.250 EAL: Restoring previous memory policy: 4 00:05:31.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.250 EAL: request: mp_malloc_sync 00:05:31.250 EAL: No shared files mode enabled, IPC is disabled 00:05:31.250 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.509 EAL: request: mp_malloc_sync 00:05:31.509 EAL: No shared files mode enabled, IPC is disabled 00:05:31.509 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.509 passed 00:05:31.509 00:05:31.509 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.509 suites 1 1 n/a 0 0 00:05:31.509 tests 2 2 2 0 0 00:05:31.509 asserts 497 497 497 0 n/a 00:05:31.509 00:05:31.509 Elapsed time = 1.131 seconds 00:05:31.509 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.509 EAL: request: mp_malloc_sync 00:05:31.509 EAL: No shared files mode enabled, IPC is disabled 00:05:31.509 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.509 EAL: No shared files mode enabled, IPC is disabled 00:05:31.509 EAL: No shared files mode enabled, IPC is disabled 00:05:31.509 EAL: No shared files mode enabled, IPC is disabled 00:05:31.509 00:05:31.509 real 0m1.273s 00:05:31.509 user 0m0.719s 00:05:31.509 sys 0m0.527s 00:05:31.509 14:25:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.509 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:31.509 ************************************ 00:05:31.509 END TEST env_vtophys 00:05:31.509 ************************************ 00:05:31.509 14:25:13 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.509 14:25:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.509 14:25:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.509 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:31.509 ************************************ 00:05:31.509 START TEST env_pci 00:05:31.509 ************************************ 00:05:31.509 14:25:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.509 00:05:31.509 00:05:31.509 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.509 http://cunit.sourceforge.net/ 00:05:31.509 00:05:31.509 00:05:31.509 Suite: pci 00:05:31.509 Test: pci_hook ...[2024-10-01 14:25:13.985092] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 680833 has claimed it 00:05:31.509 EAL: Cannot find device (10000:00:01.0) 00:05:31.509 EAL: Failed to attach device on primary process 00:05:31.509 passed 00:05:31.509 00:05:31.509 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.509 suites 1 1 n/a 0 0 00:05:31.509 tests 1 1 1 0 0 
00:05:31.509 asserts 25 25 25 0 n/a 00:05:31.509 00:05:31.509 Elapsed time = 0.035 seconds 00:05:31.509 00:05:31.509 real 0m0.054s 00:05:31.509 user 0m0.014s 00:05:31.509 sys 0m0.040s 00:05:31.509 14:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.509 14:25:14 -- common/autotest_common.sh@10 -- # set +x 00:05:31.509 ************************************ 00:05:31.509 END TEST env_pci 00:05:31.510 ************************************ 00:05:31.769 14:25:14 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.769 14:25:14 -- env/env.sh@15 -- # uname 00:05:31.769 14:25:14 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.769 14:25:14 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:31.769 14:25:14 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.769 14:25:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:31.769 14:25:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.769 14:25:14 -- common/autotest_common.sh@10 -- # set +x 00:05:31.769 ************************************ 00:05:31.769 START TEST env_dpdk_post_init 00:05:31.769 ************************************ 00:05:31.769 14:25:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.769 EAL: Detected CPU lcores: 72 00:05:31.769 EAL: Detected NUMA nodes: 2 00:05:31.769 EAL: Detected static linkage of DPDK 00:05:31.769 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.769 EAL: Selected IOVA mode 'VA' 00:05:31.769 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.769 EAL: VFIO support initialized 00:05:31.769 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.769 EAL: Using IOMMU type 1 (Type 1) 00:05:32.707 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:05:38.129 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:05:38.129 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:05:38.129 Starting DPDK initialization... 00:05:38.129 Starting SPDK post initialization... 00:05:38.129 SPDK NVMe probe 00:05:38.129 Attaching to 0000:1a:00.0 00:05:38.129 Attached to 0000:1a:00.0 00:05:38.129 Cleaning up... 
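env_dpdk_post_init has now probed, attached, and cleaned up the lone NVMe controller at 0000:1a:00.0, the same BDF that get_nvme_bdfs derived earlier. That enumeration can be reproduced by hand with the same pipeline the trace shows at common/autotest_common.sh@1499:

    # Re-running the BDF enumeration from the harness by hand.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"  # expect a single line on this node: 0000:1a:00.0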
00:05:38.129 00:05:38.129 real 0m6.537s 00:05:38.130 user 0m5.026s 00:05:38.130 sys 0m0.762s 00:05:38.130 14:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.130 14:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:38.130 ************************************ 00:05:38.130 END TEST env_dpdk_post_init 00:05:38.130 ************************************ 00:05:38.389 14:25:20 -- env/env.sh@26 -- # uname 00:05:38.389 14:25:20 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:38.389 14:25:20 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.389 14:25:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.389 14:25:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.389 14:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:38.389 ************************************ 00:05:38.389 START TEST env_mem_callbacks 00:05:38.389 ************************************ 00:05:38.389 14:25:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.389 EAL: Detected CPU lcores: 72 00:05:38.389 EAL: Detected NUMA nodes: 2 00:05:38.389 EAL: Detected static linkage of DPDK 00:05:38.389 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.389 EAL: Selected IOVA mode 'VA' 00:05:38.389 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.389 EAL: VFIO support initialized 00:05:38.389 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.389 00:05:38.389 00:05:38.389 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.389 http://cunit.sourceforge.net/ 00:05:38.389 00:05:38.389 00:05:38.389 Suite: memory 00:05:38.389 Test: test ... 
00:05:38.389 register 0x200000200000 2097152 00:05:38.389 malloc 3145728 00:05:38.389 register 0x200000400000 4194304 00:05:38.389 buf 0x200000500000 len 3145728 PASSED 00:05:38.389 malloc 64 00:05:38.389 buf 0x2000004fff40 len 64 PASSED 00:05:38.389 malloc 4194304 00:05:38.389 register 0x200000800000 6291456 00:05:38.389 buf 0x200000a00000 len 4194304 PASSED 00:05:38.389 free 0x200000500000 3145728 00:05:38.389 free 0x2000004fff40 64 00:05:38.389 unregister 0x200000400000 4194304 PASSED 00:05:38.389 free 0x200000a00000 4194304 00:05:38.389 unregister 0x200000800000 6291456 PASSED 00:05:38.389 malloc 8388608 00:05:38.389 register 0x200000400000 10485760 00:05:38.389 buf 0x200000600000 len 8388608 PASSED 00:05:38.389 free 0x200000600000 8388608 00:05:38.389 unregister 0x200000400000 10485760 PASSED 00:05:38.389 passed 00:05:38.389 00:05:38.389 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.389 suites 1 1 n/a 0 0 00:05:38.389 tests 1 1 1 0 0 00:05:38.389 asserts 15 15 15 0 n/a 00:05:38.389 00:05:38.389 Elapsed time = 0.009 seconds 00:05:38.389 00:05:38.389 real 0m0.075s 00:05:38.389 user 0m0.018s 00:05:38.389 sys 0m0.057s 00:05:38.389 14:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.389 14:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:38.389 ************************************ 00:05:38.389 END TEST env_mem_callbacks 00:05:38.389 ************************************ 00:05:38.389 00:05:38.389 real 0m8.397s 00:05:38.389 user 0m5.986s 00:05:38.389 sys 0m1.687s 00:05:38.389 14:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.389 14:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:38.389 ************************************ 00:05:38.389 END TEST env 00:05:38.389 ************************************ 00:05:38.389 14:25:20 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:38.389 14:25:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.389 14:25:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.389 14:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:38.389 ************************************ 00:05:38.389 START TEST rpc 00:05:38.389 ************************************ 00:05:38.389 14:25:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:38.648 * Looking for test storage... 00:05:38.648 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:38.648 14:25:20 -- rpc/rpc.sh@65 -- # spdk_pid=681876 00:05:38.648 14:25:20 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.648 14:25:20 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:38.648 14:25:20 -- rpc/rpc.sh@67 -- # waitforlisten 681876 00:05:38.648 14:25:20 -- common/autotest_common.sh@819 -- # '[' -z 681876 ']' 00:05:38.648 14:25:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.648 14:25:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.648 14:25:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
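rpc.sh launches spdk_tgt with '-e bdev' and waits on /var/tmp/spdk.sock; every rpc_cmd in the traces that follow is rpc.py speaking JSON-RPC over that socket. A hand-run equivalent of the first integrity check (same tools the harness uses; the zero-length result assumes no bdevs have been created yet):

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock bdev_get_bdevs | jq length  # 0 until bdev_malloc_create runs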
00:05:38.648 14:25:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.648 14:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:38.648 [2024-10-01 14:25:20.971655] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:38.648 [2024-10-01 14:25:20.971739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681876 ] 00:05:38.648 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.648 [2024-10-01 14:25:21.059755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.648 [2024-10-01 14:25:21.151845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.648 [2024-10-01 14:25:21.151957] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:38.648 [2024-10-01 14:25:21.151969] app.c: 492:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 681876' to capture a snapshot of events at runtime. 00:05:38.648 [2024-10-01 14:25:21.151978] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid681876 for offline analysis/debug. 00:05:38.648 [2024-10-01 14:25:21.151998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.581 14:25:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.581 14:25:21 -- common/autotest_common.sh@852 -- # return 0 00:05:39.581 14:25:21 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:39.581 14:25:21 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:39.581 14:25:21 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:39.581 14:25:21 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:39.581 14:25:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.581 14:25:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.581 14:25:21 -- common/autotest_common.sh@10 -- # set +x 00:05:39.581 ************************************ 00:05:39.581 START TEST rpc_integrity 00:05:39.581 ************************************ 00:05:39.581 14:25:21 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:39.581 14:25:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.581 14:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.581 14:25:21 -- common/autotest_common.sh@10 -- # set +x 00:05:39.581 14:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.581 14:25:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.581 14:25:21 -- rpc/rpc.sh@13 -- # jq length 00:05:39.581 14:25:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.581 14:25:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.581 14:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.581 14:25:21 -- common/autotest_common.sh@10 -- # set +x 00:05:39.581 14:25:21 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.581 14:25:21 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:39.581 14:25:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.581 14:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.581 14:25:21 -- common/autotest_common.sh@10 -- # set +x 00:05:39.581 14:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.581 14:25:21 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.581 { 00:05:39.581 "name": "Malloc0", 00:05:39.581 "aliases": [ 00:05:39.581 "97788f3b-4180-4f26-bd75-2d44c098b34d" 00:05:39.581 ], 00:05:39.581 "product_name": "Malloc disk", 00:05:39.581 "block_size": 512, 00:05:39.581 "num_blocks": 16384, 00:05:39.581 "uuid": "97788f3b-4180-4f26-bd75-2d44c098b34d", 00:05:39.581 "assigned_rate_limits": { 00:05:39.581 "rw_ios_per_sec": 0, 00:05:39.581 "rw_mbytes_per_sec": 0, 00:05:39.581 "r_mbytes_per_sec": 0, 00:05:39.581 "w_mbytes_per_sec": 0 00:05:39.581 }, 00:05:39.581 "claimed": false, 00:05:39.581 "zoned": false, 00:05:39.581 "supported_io_types": { 00:05:39.581 "read": true, 00:05:39.581 "write": true, 00:05:39.581 "unmap": true, 00:05:39.581 "write_zeroes": true, 00:05:39.582 "flush": true, 00:05:39.582 "reset": true, 00:05:39.582 "compare": false, 00:05:39.582 "compare_and_write": false, 00:05:39.582 "abort": true, 00:05:39.582 "nvme_admin": false, 00:05:39.582 "nvme_io": false 00:05:39.582 }, 00:05:39.582 "memory_domains": [ 00:05:39.582 { 00:05:39.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.582 "dma_device_type": 2 00:05:39.582 } 00:05:39.582 ], 00:05:39.582 "driver_specific": {} 00:05:39.582 } 00:05:39.582 ]' 00:05:39.582 14:25:21 -- rpc/rpc.sh@17 -- # jq length 00:05:39.582 14:25:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.582 14:25:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:39.582 14:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.582 14:25:21 -- common/autotest_common.sh@10 -- # set +x 00:05:39.582 [2024-10-01 14:25:21.970842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:39.582 [2024-10-01 14:25:21.970881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.582 [2024-10-01 14:25:21.970906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50d3680 00:05:39.582 [2024-10-01 14:25:21.970918] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.582 [2024-10-01 14:25:21.971799] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.582 [2024-10-01 14:25:21.971823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.582 Passthru0 00:05:39.582 14:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.582 14:25:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.582 14:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.582 14:25:21 -- common/autotest_common.sh@10 -- # set +x 00:05:39.582 14:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.582 14:25:21 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.582 { 00:05:39.582 "name": "Malloc0", 00:05:39.582 "aliases": [ 00:05:39.582 "97788f3b-4180-4f26-bd75-2d44c098b34d" 00:05:39.582 ], 00:05:39.582 "product_name": "Malloc disk", 00:05:39.582 "block_size": 512, 00:05:39.582 "num_blocks": 16384, 00:05:39.582 "uuid": "97788f3b-4180-4f26-bd75-2d44c098b34d", 00:05:39.582 "assigned_rate_limits": { 00:05:39.582 "rw_ios_per_sec": 0, 00:05:39.582 
"rw_mbytes_per_sec": 0, 00:05:39.582 "r_mbytes_per_sec": 0, 00:05:39.582 "w_mbytes_per_sec": 0 00:05:39.582 }, 00:05:39.582 "claimed": true, 00:05:39.582 "claim_type": "exclusive_write", 00:05:39.582 "zoned": false, 00:05:39.582 "supported_io_types": { 00:05:39.582 "read": true, 00:05:39.582 "write": true, 00:05:39.582 "unmap": true, 00:05:39.582 "write_zeroes": true, 00:05:39.582 "flush": true, 00:05:39.582 "reset": true, 00:05:39.582 "compare": false, 00:05:39.582 "compare_and_write": false, 00:05:39.582 "abort": true, 00:05:39.582 "nvme_admin": false, 00:05:39.582 "nvme_io": false 00:05:39.582 }, 00:05:39.582 "memory_domains": [ 00:05:39.582 { 00:05:39.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.582 "dma_device_type": 2 00:05:39.582 } 00:05:39.582 ], 00:05:39.582 "driver_specific": {} 00:05:39.582 }, 00:05:39.582 { 00:05:39.582 "name": "Passthru0", 00:05:39.582 "aliases": [ 00:05:39.582 "7d181d34-6484-5351-bc7e-217423c84296" 00:05:39.582 ], 00:05:39.582 "product_name": "passthru", 00:05:39.582 "block_size": 512, 00:05:39.582 "num_blocks": 16384, 00:05:39.582 "uuid": "7d181d34-6484-5351-bc7e-217423c84296", 00:05:39.582 "assigned_rate_limits": { 00:05:39.582 "rw_ios_per_sec": 0, 00:05:39.582 "rw_mbytes_per_sec": 0, 00:05:39.582 "r_mbytes_per_sec": 0, 00:05:39.582 "w_mbytes_per_sec": 0 00:05:39.582 }, 00:05:39.582 "claimed": false, 00:05:39.582 "zoned": false, 00:05:39.582 "supported_io_types": { 00:05:39.582 "read": true, 00:05:39.582 "write": true, 00:05:39.582 "unmap": true, 00:05:39.582 "write_zeroes": true, 00:05:39.582 "flush": true, 00:05:39.582 "reset": true, 00:05:39.582 "compare": false, 00:05:39.582 "compare_and_write": false, 00:05:39.582 "abort": true, 00:05:39.582 "nvme_admin": false, 00:05:39.582 "nvme_io": false 00:05:39.582 }, 00:05:39.582 "memory_domains": [ 00:05:39.582 { 00:05:39.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.582 "dma_device_type": 2 00:05:39.582 } 00:05:39.582 ], 00:05:39.582 "driver_specific": { 00:05:39.582 "passthru": { 00:05:39.582 "name": "Passthru0", 00:05:39.582 "base_bdev_name": "Malloc0" 00:05:39.582 } 00:05:39.582 } 00:05:39.582 } 00:05:39.582 ]' 00:05:39.582 14:25:21 -- rpc/rpc.sh@21 -- # jq length 00:05:39.582 14:25:22 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.582 14:25:22 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.582 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.582 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.582 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.582 14:25:22 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:39.582 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.582 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.582 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.582 14:25:22 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.582 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.582 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.582 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.582 14:25:22 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.582 14:25:22 -- rpc/rpc.sh@26 -- # jq length 00:05:39.840 14:25:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.840 00:05:39.840 real 0m0.264s 00:05:39.840 user 0m0.168s 00:05:39.840 sys 0m0.037s 00:05:39.840 14:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.840 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.840 
************************************ 00:05:39.840 END TEST rpc_integrity 00:05:39.840 ************************************ 00:05:39.840 14:25:22 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:39.840 14:25:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.840 14:25:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.840 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.840 ************************************ 00:05:39.840 START TEST rpc_plugins 00:05:39.840 ************************************ 00:05:39.840 14:25:22 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:39.840 14:25:22 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:39.840 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.840 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.840 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.840 14:25:22 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:39.840 14:25:22 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:39.840 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.840 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.840 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.840 14:25:22 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:39.840 { 00:05:39.840 "name": "Malloc1", 00:05:39.840 "aliases": [ 00:05:39.840 "66b169b3-42ec-497a-9e43-fd01aec9217f" 00:05:39.840 ], 00:05:39.840 "product_name": "Malloc disk", 00:05:39.840 "block_size": 4096, 00:05:39.840 "num_blocks": 256, 00:05:39.840 "uuid": "66b169b3-42ec-497a-9e43-fd01aec9217f", 00:05:39.840 "assigned_rate_limits": { 00:05:39.840 "rw_ios_per_sec": 0, 00:05:39.840 "rw_mbytes_per_sec": 0, 00:05:39.840 "r_mbytes_per_sec": 0, 00:05:39.840 "w_mbytes_per_sec": 0 00:05:39.840 }, 00:05:39.840 "claimed": false, 00:05:39.840 "zoned": false, 00:05:39.840 "supported_io_types": { 00:05:39.840 "read": true, 00:05:39.840 "write": true, 00:05:39.840 "unmap": true, 00:05:39.840 "write_zeroes": true, 00:05:39.840 "flush": true, 00:05:39.840 "reset": true, 00:05:39.840 "compare": false, 00:05:39.840 "compare_and_write": false, 00:05:39.840 "abort": true, 00:05:39.840 "nvme_admin": false, 00:05:39.840 "nvme_io": false 00:05:39.840 }, 00:05:39.840 "memory_domains": [ 00:05:39.841 { 00:05:39.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.841 "dma_device_type": 2 00:05:39.841 } 00:05:39.841 ], 00:05:39.841 "driver_specific": {} 00:05:39.841 } 00:05:39.841 ]' 00:05:39.841 14:25:22 -- rpc/rpc.sh@32 -- # jq length 00:05:39.841 14:25:22 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:39.841 14:25:22 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:39.841 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.841 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.841 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.841 14:25:22 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:39.841 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.841 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.841 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.841 14:25:22 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:39.841 14:25:22 -- rpc/rpc.sh@36 -- # jq length 00:05:39.841 14:25:22 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:39.841 00:05:39.841 real 0m0.137s 00:05:39.841 user 0m0.085s 00:05:39.841 sys 0m0.021s 00:05:39.841 14:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:05:39.841 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.841 ************************************ 00:05:39.841 END TEST rpc_plugins 00:05:39.841 ************************************ 00:05:39.841 14:25:22 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:39.841 14:25:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.841 14:25:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.841 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.841 ************************************ 00:05:39.841 START TEST rpc_trace_cmd_test 00:05:39.841 ************************************ 00:05:39.841 14:25:22 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:39.841 14:25:22 -- rpc/rpc.sh@40 -- # local info 00:05:39.841 14:25:22 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:39.841 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.841 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.099 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.099 14:25:22 -- rpc/rpc.sh@42 -- # info='{ 00:05:40.099 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid681876", 00:05:40.099 "tpoint_group_mask": "0x8", 00:05:40.099 "iscsi_conn": { 00:05:40.099 "mask": "0x2", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "scsi": { 00:05:40.099 "mask": "0x4", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "bdev": { 00:05:40.099 "mask": "0x8", 00:05:40.099 "tpoint_mask": "0xffffffffffffffff" 00:05:40.099 }, 00:05:40.099 "nvmf_rdma": { 00:05:40.099 "mask": "0x10", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "nvmf_tcp": { 00:05:40.099 "mask": "0x20", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "ftl": { 00:05:40.099 "mask": "0x40", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "blobfs": { 00:05:40.099 "mask": "0x80", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "dsa": { 00:05:40.099 "mask": "0x200", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "thread": { 00:05:40.099 "mask": "0x400", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "nvme_pcie": { 00:05:40.099 "mask": "0x800", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "iaa": { 00:05:40.099 "mask": "0x1000", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "nvme_tcp": { 00:05:40.099 "mask": "0x2000", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 }, 00:05:40.099 "bdev_nvme": { 00:05:40.099 "mask": "0x4000", 00:05:40.099 "tpoint_mask": "0x0" 00:05:40.099 } 00:05:40.099 }' 00:05:40.099 14:25:22 -- rpc/rpc.sh@43 -- # jq length 00:05:40.099 14:25:22 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:40.099 14:25:22 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:40.099 14:25:22 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:40.099 14:25:22 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:40.099 14:25:22 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:40.099 14:25:22 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:40.099 14:25:22 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:40.099 14:25:22 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:40.099 14:25:22 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:40.099 00:05:40.099 real 0m0.229s 00:05:40.099 user 0m0.187s 00:05:40.099 sys 0m0.035s 00:05:40.099 14:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.099 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.099 
************************************ 00:05:40.099 END TEST rpc_trace_cmd_test 00:05:40.099 ************************************ 00:05:40.099 14:25:22 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:40.099 14:25:22 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:40.099 14:25:22 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:40.099 14:25:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.099 14:25:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.099 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.099 ************************************ 00:05:40.099 START TEST rpc_daemon_integrity 00:05:40.099 ************************************ 00:05:40.099 14:25:22 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:40.099 14:25:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.099 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.099 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.359 14:25:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.359 14:25:22 -- rpc/rpc.sh@13 -- # jq length 00:05:40.359 14:25:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.359 14:25:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.359 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.359 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.359 14:25:22 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:40.359 14:25:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.359 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.359 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.359 14:25:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.359 { 00:05:40.359 "name": "Malloc2", 00:05:40.359 "aliases": [ 00:05:40.359 "ba8506c8-fea7-46e4-8071-755b7d3aabe7" 00:05:40.359 ], 00:05:40.359 "product_name": "Malloc disk", 00:05:40.359 "block_size": 512, 00:05:40.359 "num_blocks": 16384, 00:05:40.359 "uuid": "ba8506c8-fea7-46e4-8071-755b7d3aabe7", 00:05:40.359 "assigned_rate_limits": { 00:05:40.359 "rw_ios_per_sec": 0, 00:05:40.359 "rw_mbytes_per_sec": 0, 00:05:40.359 "r_mbytes_per_sec": 0, 00:05:40.359 "w_mbytes_per_sec": 0 00:05:40.359 }, 00:05:40.359 "claimed": false, 00:05:40.359 "zoned": false, 00:05:40.359 "supported_io_types": { 00:05:40.359 "read": true, 00:05:40.359 "write": true, 00:05:40.359 "unmap": true, 00:05:40.359 "write_zeroes": true, 00:05:40.359 "flush": true, 00:05:40.359 "reset": true, 00:05:40.359 "compare": false, 00:05:40.359 "compare_and_write": false, 00:05:40.359 "abort": true, 00:05:40.359 "nvme_admin": false, 00:05:40.359 "nvme_io": false 00:05:40.359 }, 00:05:40.359 "memory_domains": [ 00:05:40.359 { 00:05:40.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.359 "dma_device_type": 2 00:05:40.359 } 00:05:40.359 ], 00:05:40.359 "driver_specific": {} 00:05:40.359 } 00:05:40.359 ]' 00:05:40.359 14:25:22 -- rpc/rpc.sh@17 -- # jq length 00:05:40.359 14:25:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.359 14:25:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:40.359 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.359 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 [2024-10-01 14:25:22.756919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
Malloc2 00:05:40.359 [2024-10-01 14:25:22.756957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.359 [2024-10-01 14:25:22.756973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x525d280 00:05:40.359 [2024-10-01 14:25:22.756983] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.359 [2024-10-01 14:25:22.757684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.359 [2024-10-01 14:25:22.757707] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.359 Passthru0 00:05:40.359 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.359 14:25:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.359 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.359 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.359 14:25:22 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.359 { 00:05:40.359 "name": "Malloc2", 00:05:40.359 "aliases": [ 00:05:40.359 "ba8506c8-fea7-46e4-8071-755b7d3aabe7" 00:05:40.359 ], 00:05:40.359 "product_name": "Malloc disk", 00:05:40.359 "block_size": 512, 00:05:40.359 "num_blocks": 16384, 00:05:40.359 "uuid": "ba8506c8-fea7-46e4-8071-755b7d3aabe7", 00:05:40.359 "assigned_rate_limits": { 00:05:40.359 "rw_ios_per_sec": 0, 00:05:40.359 "rw_mbytes_per_sec": 0, 00:05:40.359 "r_mbytes_per_sec": 0, 00:05:40.359 "w_mbytes_per_sec": 0 00:05:40.359 }, 00:05:40.359 "claimed": true, 00:05:40.359 "claim_type": "exclusive_write", 00:05:40.359 "zoned": false, 00:05:40.359 "supported_io_types": { 00:05:40.359 "read": true, 00:05:40.359 "write": true, 00:05:40.359 "unmap": true, 00:05:40.359 "write_zeroes": true, 00:05:40.359 "flush": true, 00:05:40.359 "reset": true, 00:05:40.359 "compare": false, 00:05:40.359 "compare_and_write": false, 00:05:40.359 "abort": true, 00:05:40.359 "nvme_admin": false, 00:05:40.359 "nvme_io": false 00:05:40.359 }, 00:05:40.359 "memory_domains": [ 00:05:40.359 { 00:05:40.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.359 "dma_device_type": 2 00:05:40.359 } 00:05:40.359 ], 00:05:40.359 "driver_specific": {} 00:05:40.359 }, 00:05:40.359 { 00:05:40.359 "name": "Passthru0", 00:05:40.359 "aliases": [ 00:05:40.359 "0ea74a56-4708-5dd1-95b1-80032a15a847" 00:05:40.359 ], 00:05:40.359 "product_name": "passthru", 00:05:40.359 "block_size": 512, 00:05:40.359 "num_blocks": 16384, 00:05:40.359 "uuid": "0ea74a56-4708-5dd1-95b1-80032a15a847", 00:05:40.359 "assigned_rate_limits": { 00:05:40.359 "rw_ios_per_sec": 0, 00:05:40.359 "rw_mbytes_per_sec": 0, 00:05:40.359 "r_mbytes_per_sec": 0, 00:05:40.359 "w_mbytes_per_sec": 0 00:05:40.359 }, 00:05:40.359 "claimed": false, 00:05:40.359 "zoned": false, 00:05:40.359 "supported_io_types": { 00:05:40.359 "read": true, 00:05:40.359 "write": true, 00:05:40.359 "unmap": true, 00:05:40.359 "write_zeroes": true, 00:05:40.359 "flush": true, 00:05:40.359 "reset": true, 00:05:40.359 "compare": false, 00:05:40.359 "compare_and_write": false, 00:05:40.359 "abort": true, 00:05:40.359 "nvme_admin": false, 00:05:40.359 "nvme_io": false 00:05:40.359 }, 00:05:40.359 "memory_domains": [ 00:05:40.359 { 00:05:40.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.359 "dma_device_type": 2 00:05:40.359 } 00:05:40.359 ], 00:05:40.359 "driver_specific": { 00:05:40.359 "passthru": { 00:05:40.359 "name": "Passthru0", 00:05:40.359 "base_bdev_name": "Malloc2" 00:05:40.359 } 
00:05:40.359 } 00:05:40.359 } 00:05:40.359 ]' 00:05:40.359 14:25:22 -- rpc/rpc.sh@21 -- # jq length 00:05:40.359 14:25:22 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.359 14:25:22 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.359 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.359 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.359 14:25:22 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:40.359 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.359 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.359 14:25:22 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.359 14:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.359 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 14:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.359 14:25:22 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.359 14:25:22 -- rpc/rpc.sh@26 -- # jq length 00:05:40.619 14:25:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.619 00:05:40.619 real 0m0.289s 00:05:40.619 user 0m0.175s 00:05:40.619 sys 0m0.052s 00:05:40.619 14:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.619 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.619 ************************************ 00:05:40.619 END TEST rpc_daemon_integrity 00:05:40.619 ************************************ 00:05:40.619 14:25:22 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:40.619 14:25:22 -- rpc/rpc.sh@84 -- # killprocess 681876 00:05:40.619 14:25:22 -- common/autotest_common.sh@926 -- # '[' -z 681876 ']' 00:05:40.619 14:25:22 -- common/autotest_common.sh@930 -- # kill -0 681876 00:05:40.619 14:25:22 -- common/autotest_common.sh@931 -- # uname 00:05:40.619 14:25:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:40.619 14:25:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 681876 00:05:40.619 14:25:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:40.619 14:25:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:40.619 14:25:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 681876' 00:05:40.619 killing process with pid 681876 00:05:40.619 14:25:23 -- common/autotest_common.sh@945 -- # kill 681876 00:05:40.619 14:25:23 -- common/autotest_common.sh@950 -- # wait 681876 00:05:40.878 00:05:40.878 real 0m2.511s 00:05:40.878 user 0m3.135s 00:05:40.878 sys 0m0.795s 00:05:40.878 14:25:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.878 14:25:23 -- common/autotest_common.sh@10 -- # set +x 00:05:40.878 ************************************ 00:05:40.878 END TEST rpc 00:05:40.878 ************************************ 00:05:40.878 14:25:23 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:40.878 14:25:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.878 14:25:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.878 14:25:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.137 ************************************ 00:05:41.137 START TEST rpc_client 00:05:41.137 ************************************ 00:05:41.137 14:25:23 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.137 * Looking for test storage... 00:05:41.137 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:41.137 14:25:23 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:41.137 OK 00:05:41.137 14:25:23 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.137 00:05:41.137 real 0m0.125s 00:05:41.137 user 0m0.055s 00:05:41.137 sys 0m0.080s 00:05:41.137 14:25:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.137 14:25:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.137 ************************************ 00:05:41.137 END TEST rpc_client 00:05:41.137 ************************************ 00:05:41.137 14:25:23 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.137 14:25:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.137 14:25:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.137 14:25:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.137 ************************************ 00:05:41.137 START TEST json_config 00:05:41.137 ************************************ 00:05:41.138 14:25:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.138 14:25:23 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.138 14:25:23 -- nvmf/common.sh@7 -- # uname -s 00:05:41.138 14:25:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.138 14:25:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.138 14:25:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.138 14:25:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.138 14:25:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.138 14:25:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.138 14:25:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.138 14:25:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.138 14:25:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.397 14:25:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.397 14:25:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:05:41.397 14:25:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:05:41.397 14:25:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.397 14:25:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.397 14:25:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.397 14:25:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:41.397 14:25:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.397 14:25:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.397 14:25:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.397 14:25:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.398 14:25:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.398 14:25:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.398 14:25:23 -- paths/export.sh@5 -- # export PATH 00:05:41.398 14:25:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.398 14:25:23 -- nvmf/common.sh@46 -- # : 0 00:05:41.398 14:25:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:41.398 14:25:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:41.398 14:25:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:41.398 14:25:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.398 14:25:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.398 14:25:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:41.398 14:25:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:41.398 14:25:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:41.398 14:25:23 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:41.398 14:25:23 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:41.398 14:25:23 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:41.398 14:25:23 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.398 14:25:23 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:41.398 WARNING: No tests are enabled so not running JSON configuration tests 00:05:41.398 14:25:23 -- json_config/json_config.sh@27 -- # exit 0 00:05:41.398 00:05:41.398 real 0m0.100s 00:05:41.398 user 0m0.054s 00:05:41.398 sys 0m0.048s 00:05:41.398 14:25:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.398 14:25:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.398 ************************************ 00:05:41.398 END TEST json_config 00:05:41.398 ************************************ 00:05:41.398 14:25:23 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.398 14:25:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.398 14:25:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.398 14:25:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.398 ************************************ 00:05:41.398 START TEST json_config_extra_key 00:05:41.398 ************************************ 00:05:41.398 14:25:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.398 14:25:23 -- nvmf/common.sh@7 -- # uname -s 00:05:41.398 14:25:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.398 14:25:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.398 14:25:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.398 14:25:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.398 14:25:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.398 14:25:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.398 14:25:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.398 14:25:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.398 14:25:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.398 14:25:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.398 14:25:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:05:41.398 14:25:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:05:41.398 14:25:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.398 14:25:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.398 14:25:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.398 14:25:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:41.398 14:25:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.398 14:25:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.398 14:25:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.398 14:25:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.398 14:25:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.398 14:25:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.398 14:25:23 -- paths/export.sh@5 -- # export PATH 00:05:41.398 14:25:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.398 14:25:23 -- nvmf/common.sh@46 -- # : 0 00:05:41.398 14:25:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:41.398 14:25:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:41.398 14:25:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:41.398 14:25:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.398 14:25:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.398 14:25:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:41.398 14:25:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:41.398 14:25:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:41.398 INFO: launching applications... 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=682582 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:41.398 Waiting for target to run... 
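At this point json_config_extra_key.sh has filled in its per-app tables (app_pid, app_socket, app_params, configs_path) and is about to launch the target. A condensed sketch of that launch, assuming the usual background-and-record-pid shape rather than the script's exact body:

    # Launch spdk_tgt with the extra_key JSON config and remember its pid (illustrative).
    app=target
    app_socket[$app]=/var/tmp/spdk_tgt.sock
    build/bin/spdk_tgt -m 0x1 -s 1024 -r "${app_socket[$app]}" \
        --json test/json_config/extra_key.json &
    app_pid[$app]=$!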
00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 682582 /var/tmp/spdk_tgt.sock 00:05:41.398 14:25:23 -- common/autotest_common.sh@819 -- # '[' -z 682582 ']' 00:05:41.398 14:25:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.398 14:25:23 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.398 14:25:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.399 14:25:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.399 14:25:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.399 14:25:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.399 [2024-10-01 14:25:23.865355] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:41.399 [2024-10-01 14:25:23.865448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682582 ] 00:05:41.399 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.967 [2024-10-01 14:25:24.429016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.227 [2024-10-01 14:25:24.524911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.227 [2024-10-01 14:25:24.525017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.227 14:25:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:42.227 14:25:24 -- common/autotest_common.sh@852 -- # return 0 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:42.227 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:42.227 INFO: shutting down applications... 
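waitforlisten, called above with pid 682582 and /var/tmp/spdk_tgt.sock, is what turns "Waiting for target to run..." into a bounded wait: it polls the RPC socket until the target answers or the retry budget runs out. A hedged sketch of that idea, not the helper's exact body; the max_retries=100 default is taken from the trace:

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100
      for ((i = max_retries; i != 0; i--)); do
        kill -0 "$pid" || return 1                       # target died while we waited
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
          return 0                                       # socket is up and answering RPCs
        fi
        sleep 0.1
      done
      return 1
    }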
00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 682582 ]] 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 682582 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 682582 00:05:42.227 14:25:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:42.796 14:25:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:42.796 14:25:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:42.796 14:25:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 682582 00:05:42.796 14:25:25 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:42.796 14:25:25 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:42.796 14:25:25 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:42.796 14:25:25 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:42.796 SPDK target shutdown done 00:05:42.796 14:25:25 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:42.796 Success 00:05:42.796 00:05:42.796 real 0m1.485s 00:05:42.796 user 0m1.022s 00:05:42.796 sys 0m0.668s 00:05:42.796 14:25:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.796 14:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:42.796 ************************************ 00:05:42.796 END TEST json_config_extra_key 00:05:42.796 ************************************ 00:05:42.796 14:25:25 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.796 14:25:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.796 14:25:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.796 14:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:42.796 ************************************ 00:05:42.796 START TEST alias_rpc 00:05:42.796 ************************************ 00:05:42.796 14:25:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.056 * Looking for test storage... 00:05:43.056 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:43.056 14:25:25 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.056 14:25:25 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=682829 00:05:43.056 14:25:25 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 682829 00:05:43.056 14:25:25 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.056 14:25:25 -- common/autotest_common.sh@819 -- # '[' -z 682829 ']' 00:05:43.056 14:25:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.056 14:25:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.056 14:25:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
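The shutdown just logged is a two-step pattern: send SIGINT, then poll the pid for up to 30 half-second intervals before printing "SPDK target shutdown done". Condensed from the json_config_extra_key.sh line numbers in the trace:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # pid gone: clean shutdown
      sleep 0.5
    done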
00:05:43.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.056 14:25:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.056 14:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.056 [2024-10-01 14:25:25.400829] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:43.056 [2024-10-01 14:25:25.400940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682829 ] 00:05:43.056 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.056 [2024-10-01 14:25:25.488553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.056 [2024-10-01 14:25:25.576321] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.056 [2024-10-01 14:25:25.576433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.994 14:25:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.994 14:25:26 -- common/autotest_common.sh@852 -- # return 0 00:05:43.994 14:25:26 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:43.994 14:25:26 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 682829 00:05:43.994 14:25:26 -- common/autotest_common.sh@926 -- # '[' -z 682829 ']' 00:05:43.994 14:25:26 -- common/autotest_common.sh@930 -- # kill -0 682829 00:05:43.994 14:25:26 -- common/autotest_common.sh@931 -- # uname 00:05:43.994 14:25:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:43.994 14:25:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 682829 00:05:43.994 14:25:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:43.994 14:25:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:43.994 14:25:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 682829' 00:05:43.994 killing process with pid 682829 00:05:43.994 14:25:26 -- common/autotest_common.sh@945 -- # kill 682829 00:05:43.994 14:25:26 -- common/autotest_common.sh@950 -- # wait 682829 00:05:44.563 00:05:44.563 real 0m1.580s 00:05:44.563 user 0m1.668s 00:05:44.563 sys 0m0.489s 00:05:44.563 14:25:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.563 14:25:26 -- common/autotest_common.sh@10 -- # set +x 00:05:44.563 ************************************ 00:05:44.563 END TEST alias_rpc 00:05:44.563 ************************************ 00:05:44.563 14:25:26 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:44.563 14:25:26 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.563 14:25:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.563 14:25:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.563 14:25:26 -- common/autotest_common.sh@10 -- # set +x 00:05:44.563 ************************************ 00:05:44.563 START TEST spdkcli_tcp 00:05:44.563 ************************************ 00:05:44.563 14:25:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.563 * Looking for test storage... 
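killprocess, which just tore down the alias_rpc target, is careful about what it kills: it verifies the pid is alive, resolves its command name with ps (reactor_0 here), refuses to touch anything named sudo, then kills and reaps it. A condensed sketch of those checks; the sudo branch's real handling is assumed:

    killprocess() {
      local pid=$1
      kill -0 "$pid" || return                             # already gone
      local process_name
      [[ $(uname) = Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
      [[ $process_name = sudo ]] && return                 # assumed guard; real helper differs
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
    }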
00:05:44.563 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:44.563 14:25:27 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:44.563 14:25:27 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:44.563 14:25:27 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:44.563 14:25:27 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.563 14:25:27 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.563 14:25:27 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.563 14:25:27 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.563 14:25:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.563 14:25:27 -- common/autotest_common.sh@10 -- # set +x 00:05:44.563 14:25:27 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=683066 00:05:44.563 14:25:27 -- spdkcli/tcp.sh@27 -- # waitforlisten 683066 00:05:44.563 14:25:27 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.563 14:25:27 -- common/autotest_common.sh@819 -- # '[' -z 683066 ']' 00:05:44.563 14:25:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.563 14:25:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.563 14:25:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.563 14:25:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.563 14:25:27 -- common/autotest_common.sh@10 -- # set +x 00:05:44.563 [2024-10-01 14:25:27.036517] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
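Unlike the single-core targets above, spdkcli_tcp starts the target with -m 0x3 -p 0, so the EAL output that follows reports two reactors (cores 0 and 1); the 127.0.0.1:9998 pair set at the top of tcp.sh is the TCP endpoint the test will bridge to. For reference, the start boils down to:

    # Two-core target for the TCP test; flags taken from the trace, paths shortened.
    build/bin/spdk_tgt -m 0x3 -p 0 &
    spdk_tgt_pid=$!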
00:05:44.563 [2024-10-01 14:25:27.036597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683066 ] 00:05:44.563 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.824 [2024-10-01 14:25:27.123467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.824 [2024-10-01 14:25:27.202703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.824 [2024-10-01 14:25:27.202891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.824 [2024-10-01 14:25:27.202891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.393 14:25:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.393 14:25:27 -- common/autotest_common.sh@852 -- # return 0 00:05:45.393 14:25:27 -- spdkcli/tcp.sh@31 -- # socat_pid=683221 00:05:45.393 14:25:27 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:45.393 14:25:27 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:45.652 [ 00:05:45.652 "spdk_get_version", 00:05:45.652 "rpc_get_methods", 00:05:45.652 "trace_get_info", 00:05:45.652 "trace_get_tpoint_group_mask", 00:05:45.652 "trace_disable_tpoint_group", 00:05:45.652 "trace_enable_tpoint_group", 00:05:45.652 "trace_clear_tpoint_mask", 00:05:45.652 "trace_set_tpoint_mask", 00:05:45.652 "vfu_tgt_set_base_path", 00:05:45.652 "framework_get_pci_devices", 00:05:45.652 "framework_get_config", 00:05:45.652 "framework_get_subsystems", 00:05:45.652 "iobuf_get_stats", 00:05:45.652 "iobuf_set_options", 00:05:45.652 "sock_set_default_impl", 00:05:45.652 "sock_impl_set_options", 00:05:45.652 "sock_impl_get_options", 00:05:45.652 "vmd_rescan", 00:05:45.652 "vmd_remove_device", 00:05:45.652 "vmd_enable", 00:05:45.652 "accel_get_stats", 00:05:45.652 "accel_set_options", 00:05:45.652 "accel_set_driver", 00:05:45.652 "accel_crypto_key_destroy", 00:05:45.652 "accel_crypto_keys_get", 00:05:45.652 "accel_crypto_key_create", 00:05:45.652 "accel_assign_opc", 00:05:45.652 "accel_get_module_info", 00:05:45.652 "accel_get_opc_assignments", 00:05:45.652 "notify_get_notifications", 00:05:45.652 "notify_get_types", 00:05:45.652 "bdev_get_histogram", 00:05:45.652 "bdev_enable_histogram", 00:05:45.652 "bdev_set_qos_limit", 00:05:45.652 "bdev_set_qd_sampling_period", 00:05:45.652 "bdev_get_bdevs", 00:05:45.652 "bdev_reset_iostat", 00:05:45.652 "bdev_get_iostat", 00:05:45.652 "bdev_examine", 00:05:45.652 "bdev_wait_for_examine", 00:05:45.652 "bdev_set_options", 00:05:45.653 "scsi_get_devices", 00:05:45.653 "thread_set_cpumask", 00:05:45.653 "framework_get_scheduler", 00:05:45.653 "framework_set_scheduler", 00:05:45.653 "framework_get_reactors", 00:05:45.653 "thread_get_io_channels", 00:05:45.653 "thread_get_pollers", 00:05:45.653 "thread_get_stats", 00:05:45.653 "framework_monitor_context_switch", 00:05:45.653 "spdk_kill_instance", 00:05:45.653 "log_enable_timestamps", 00:05:45.653 "log_get_flags", 00:05:45.653 "log_clear_flag", 00:05:45.653 "log_set_flag", 00:05:45.653 "log_get_level", 00:05:45.653 "log_set_level", 00:05:45.653 "log_get_print_level", 00:05:45.653 "log_set_print_level", 00:05:45.653 "framework_enable_cpumask_locks", 00:05:45.653 "framework_disable_cpumask_locks", 00:05:45.653 "framework_wait_init", 00:05:45.653 
"framework_start_init", 00:05:45.653 "virtio_blk_create_transport", 00:05:45.653 "virtio_blk_get_transports", 00:05:45.653 "vhost_controller_set_coalescing", 00:05:45.653 "vhost_get_controllers", 00:05:45.653 "vhost_delete_controller", 00:05:45.653 "vhost_create_blk_controller", 00:05:45.653 "vhost_scsi_controller_remove_target", 00:05:45.653 "vhost_scsi_controller_add_target", 00:05:45.653 "vhost_start_scsi_controller", 00:05:45.653 "vhost_create_scsi_controller", 00:05:45.653 "ublk_recover_disk", 00:05:45.653 "ublk_get_disks", 00:05:45.653 "ublk_stop_disk", 00:05:45.653 "ublk_start_disk", 00:05:45.653 "ublk_destroy_target", 00:05:45.653 "ublk_create_target", 00:05:45.653 "nbd_get_disks", 00:05:45.653 "nbd_stop_disk", 00:05:45.653 "nbd_start_disk", 00:05:45.653 "env_dpdk_get_mem_stats", 00:05:45.653 "nvmf_subsystem_get_listeners", 00:05:45.653 "nvmf_subsystem_get_qpairs", 00:05:45.653 "nvmf_subsystem_get_controllers", 00:05:45.653 "nvmf_get_stats", 00:05:45.653 "nvmf_get_transports", 00:05:45.653 "nvmf_create_transport", 00:05:45.653 "nvmf_get_targets", 00:05:45.653 "nvmf_delete_target", 00:05:45.653 "nvmf_create_target", 00:05:45.653 "nvmf_subsystem_allow_any_host", 00:05:45.653 "nvmf_subsystem_remove_host", 00:05:45.653 "nvmf_subsystem_add_host", 00:05:45.653 "nvmf_subsystem_remove_ns", 00:05:45.653 "nvmf_subsystem_add_ns", 00:05:45.653 "nvmf_subsystem_listener_set_ana_state", 00:05:45.653 "nvmf_discovery_get_referrals", 00:05:45.653 "nvmf_discovery_remove_referral", 00:05:45.653 "nvmf_discovery_add_referral", 00:05:45.653 "nvmf_subsystem_remove_listener", 00:05:45.653 "nvmf_subsystem_add_listener", 00:05:45.653 "nvmf_delete_subsystem", 00:05:45.653 "nvmf_create_subsystem", 00:05:45.653 "nvmf_get_subsystems", 00:05:45.653 "nvmf_set_crdt", 00:05:45.653 "nvmf_set_config", 00:05:45.653 "nvmf_set_max_subsystems", 00:05:45.653 "iscsi_set_options", 00:05:45.653 "iscsi_get_auth_groups", 00:05:45.653 "iscsi_auth_group_remove_secret", 00:05:45.653 "iscsi_auth_group_add_secret", 00:05:45.653 "iscsi_delete_auth_group", 00:05:45.653 "iscsi_create_auth_group", 00:05:45.653 "iscsi_set_discovery_auth", 00:05:45.653 "iscsi_get_options", 00:05:45.653 "iscsi_target_node_request_logout", 00:05:45.653 "iscsi_target_node_set_redirect", 00:05:45.653 "iscsi_target_node_set_auth", 00:05:45.653 "iscsi_target_node_add_lun", 00:05:45.653 "iscsi_get_connections", 00:05:45.653 "iscsi_portal_group_set_auth", 00:05:45.653 "iscsi_start_portal_group", 00:05:45.653 "iscsi_delete_portal_group", 00:05:45.653 "iscsi_create_portal_group", 00:05:45.653 "iscsi_get_portal_groups", 00:05:45.653 "iscsi_delete_target_node", 00:05:45.653 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.653 "iscsi_target_node_add_pg_ig_maps", 00:05:45.653 "iscsi_create_target_node", 00:05:45.653 "iscsi_get_target_nodes", 00:05:45.653 "iscsi_delete_initiator_group", 00:05:45.653 "iscsi_initiator_group_remove_initiators", 00:05:45.653 "iscsi_initiator_group_add_initiators", 00:05:45.653 "iscsi_create_initiator_group", 00:05:45.653 "iscsi_get_initiator_groups", 00:05:45.653 "vfu_virtio_create_scsi_endpoint", 00:05:45.653 "vfu_virtio_scsi_remove_target", 00:05:45.653 "vfu_virtio_scsi_add_target", 00:05:45.653 "vfu_virtio_create_blk_endpoint", 00:05:45.653 "vfu_virtio_delete_endpoint", 00:05:45.653 "iaa_scan_accel_module", 00:05:45.653 "dsa_scan_accel_module", 00:05:45.653 "ioat_scan_accel_module", 00:05:45.653 "accel_error_inject_error", 00:05:45.653 "bdev_iscsi_delete", 00:05:45.653 "bdev_iscsi_create", 00:05:45.653 "bdev_iscsi_set_options", 
00:05:45.653 "bdev_virtio_attach_controller", 00:05:45.653 "bdev_virtio_scsi_get_devices", 00:05:45.653 "bdev_virtio_detach_controller", 00:05:45.653 "bdev_virtio_blk_set_hotplug", 00:05:45.653 "bdev_ftl_set_property", 00:05:45.653 "bdev_ftl_get_properties", 00:05:45.653 "bdev_ftl_get_stats", 00:05:45.653 "bdev_ftl_unmap", 00:05:45.653 "bdev_ftl_unload", 00:05:45.653 "bdev_ftl_delete", 00:05:45.653 "bdev_ftl_load", 00:05:45.653 "bdev_ftl_create", 00:05:45.653 "bdev_aio_delete", 00:05:45.653 "bdev_aio_rescan", 00:05:45.653 "bdev_aio_create", 00:05:45.653 "blobfs_create", 00:05:45.653 "blobfs_detect", 00:05:45.653 "blobfs_set_cache_size", 00:05:45.653 "bdev_zone_block_delete", 00:05:45.653 "bdev_zone_block_create", 00:05:45.653 "bdev_delay_delete", 00:05:45.653 "bdev_delay_create", 00:05:45.653 "bdev_delay_update_latency", 00:05:45.653 "bdev_split_delete", 00:05:45.653 "bdev_split_create", 00:05:45.653 "bdev_error_inject_error", 00:05:45.653 "bdev_error_delete", 00:05:45.653 "bdev_error_create", 00:05:45.653 "bdev_raid_set_options", 00:05:45.653 "bdev_raid_remove_base_bdev", 00:05:45.653 "bdev_raid_add_base_bdev", 00:05:45.653 "bdev_raid_delete", 00:05:45.653 "bdev_raid_create", 00:05:45.653 "bdev_raid_get_bdevs", 00:05:45.653 "bdev_lvol_grow_lvstore", 00:05:45.653 "bdev_lvol_get_lvols", 00:05:45.653 "bdev_lvol_get_lvstores", 00:05:45.653 "bdev_lvol_delete", 00:05:45.653 "bdev_lvol_set_read_only", 00:05:45.653 "bdev_lvol_resize", 00:05:45.653 "bdev_lvol_decouple_parent", 00:05:45.653 "bdev_lvol_inflate", 00:05:45.653 "bdev_lvol_rename", 00:05:45.653 "bdev_lvol_clone_bdev", 00:05:45.653 "bdev_lvol_clone", 00:05:45.653 "bdev_lvol_snapshot", 00:05:45.653 "bdev_lvol_create", 00:05:45.653 "bdev_lvol_delete_lvstore", 00:05:45.653 "bdev_lvol_rename_lvstore", 00:05:45.653 "bdev_lvol_create_lvstore", 00:05:45.653 "bdev_passthru_delete", 00:05:45.653 "bdev_passthru_create", 00:05:45.653 "bdev_nvme_cuse_unregister", 00:05:45.653 "bdev_nvme_cuse_register", 00:05:45.653 "bdev_opal_new_user", 00:05:45.653 "bdev_opal_set_lock_state", 00:05:45.653 "bdev_opal_delete", 00:05:45.653 "bdev_opal_get_info", 00:05:45.653 "bdev_opal_create", 00:05:45.653 "bdev_nvme_opal_revert", 00:05:45.653 "bdev_nvme_opal_init", 00:05:45.653 "bdev_nvme_send_cmd", 00:05:45.653 "bdev_nvme_get_path_iostat", 00:05:45.653 "bdev_nvme_get_mdns_discovery_info", 00:05:45.653 "bdev_nvme_stop_mdns_discovery", 00:05:45.653 "bdev_nvme_start_mdns_discovery", 00:05:45.653 "bdev_nvme_set_multipath_policy", 00:05:45.653 "bdev_nvme_set_preferred_path", 00:05:45.653 "bdev_nvme_get_io_paths", 00:05:45.653 "bdev_nvme_remove_error_injection", 00:05:45.653 "bdev_nvme_add_error_injection", 00:05:45.653 "bdev_nvme_get_discovery_info", 00:05:45.653 "bdev_nvme_stop_discovery", 00:05:45.653 "bdev_nvme_start_discovery", 00:05:45.653 "bdev_nvme_get_controller_health_info", 00:05:45.653 "bdev_nvme_disable_controller", 00:05:45.653 "bdev_nvme_enable_controller", 00:05:45.653 "bdev_nvme_reset_controller", 00:05:45.653 "bdev_nvme_get_transport_statistics", 00:05:45.653 "bdev_nvme_apply_firmware", 00:05:45.653 "bdev_nvme_detach_controller", 00:05:45.653 "bdev_nvme_get_controllers", 00:05:45.653 "bdev_nvme_attach_controller", 00:05:45.653 "bdev_nvme_set_hotplug", 00:05:45.653 "bdev_nvme_set_options", 00:05:45.653 "bdev_null_resize", 00:05:45.653 "bdev_null_delete", 00:05:45.653 "bdev_null_create", 00:05:45.653 "bdev_malloc_delete", 00:05:45.653 "bdev_malloc_create" 00:05:45.653 ] 00:05:45.653 14:25:28 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 
00:05:45.653 14:25:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:45.653 14:25:28 -- common/autotest_common.sh@10 -- # set +x 00:05:45.653 14:25:28 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:45.653 14:25:28 -- spdkcli/tcp.sh@38 -- # killprocess 683066 00:05:45.653 14:25:28 -- common/autotest_common.sh@926 -- # '[' -z 683066 ']' 00:05:45.653 14:25:28 -- common/autotest_common.sh@930 -- # kill -0 683066 00:05:45.653 14:25:28 -- common/autotest_common.sh@931 -- # uname 00:05:45.653 14:25:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.653 14:25:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 683066 00:05:45.912 14:25:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.912 14:25:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.912 14:25:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 683066' 00:05:45.912 killing process with pid 683066 00:05:45.912 14:25:28 -- common/autotest_common.sh@945 -- # kill 683066 00:05:45.912 14:25:28 -- common/autotest_common.sh@950 -- # wait 683066 00:05:46.172 00:05:46.172 real 0m1.602s 00:05:46.172 user 0m2.948s 00:05:46.172 sys 0m0.537s 00:05:46.172 14:25:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.172 14:25:28 -- common/autotest_common.sh@10 -- # set +x 00:05:46.172 ************************************ 00:05:46.172 END TEST spdkcli_tcp 00:05:46.172 ************************************ 00:05:46.172 14:25:28 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.172 14:25:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.172 14:25:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.172 14:25:28 -- common/autotest_common.sh@10 -- # set +x 00:05:46.172 ************************************ 00:05:46.172 START TEST dpdk_mem_utility 00:05:46.172 ************************************ 00:05:46.172 14:25:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.172 * Looking for test storage... 00:05:46.172 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:46.172 14:25:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.172 14:25:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=683316 00:05:46.172 14:25:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 683316 00:05:46.172 14:25:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.172 14:25:28 -- common/autotest_common.sh@819 -- # '[' -z 683316 ']' 00:05:46.172 14:25:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.172 14:25:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.172 14:25:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
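dpdk_mem_utility exercises scripts/dpdk_mem_info.py, which post-processes the dump the target writes when asked for env_dpdk_get_mem_stats (the RPC reply below names /tmp/spdk_mem_dump.txt). The flow, condensed from the invocations in the trace:

    scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0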
00:05:46.172 14:25:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.172 14:25:28 -- common/autotest_common.sh@10 -- # set +x 00:05:46.172 [2024-10-01 14:25:28.681639] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:46.172 [2024-10-01 14:25:28.681716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683316 ] 00:05:46.432 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.432 [2024-10-01 14:25:28.751802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.432 [2024-10-01 14:25:28.840932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.432 [2024-10-01 14:25:28.841064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.371 14:25:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.371 14:25:29 -- common/autotest_common.sh@852 -- # return 0 00:05:47.371 14:25:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.371 14:25:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.371 14:25:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.371 14:25:29 -- common/autotest_common.sh@10 -- # set +x 00:05:47.371 { 00:05:47.371 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.371 } 00:05:47.371 14:25:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.371 14:25:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.371 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:47.371 1 heaps totaling size 814.000000 MiB 00:05:47.371 size: 814.000000 MiB heap id: 0 00:05:47.371 end heaps---------- 00:05:47.371 8 mempools totaling size 598.116089 MiB 00:05:47.371 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.371 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.371 size: 84.521057 MiB name: bdev_io_683316 00:05:47.371 size: 51.011292 MiB name: evtpool_683316 00:05:47.371 size: 50.003479 MiB name: msgpool_683316 00:05:47.371 size: 21.763794 MiB name: PDU_Pool 00:05:47.371 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.371 size: 0.026123 MiB name: Session_Pool 00:05:47.371 end mempools------- 00:05:47.371 6 memzones totaling size 4.142822 MiB 00:05:47.371 size: 1.000366 MiB name: RG_ring_0_683316 00:05:47.371 size: 1.000366 MiB name: RG_ring_1_683316 00:05:47.371 size: 1.000366 MiB name: RG_ring_4_683316 00:05:47.371 size: 1.000366 MiB name: RG_ring_5_683316 00:05:47.371 size: 0.125366 MiB name: RG_ring_2_683316 00:05:47.371 size: 0.015991 MiB name: RG_ring_3_683316 00:05:47.371 end memzones------- 00:05:47.371 14:25:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.371 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:47.371 list of free elements. 
size: 12.519348 MiB 00:05:47.371 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:47.371 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:47.371 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:47.371 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:47.371 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:47.371 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:47.371 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:47.371 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:47.371 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:47.371 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:47.371 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:47.371 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:47.371 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:47.371 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:47.371 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:47.371 list of standard malloc elements. size: 199.218079 MiB 00:05:47.371 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:47.371 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:47.371 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:47.371 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:47.371 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:47.371 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:47.371 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:47.371 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:47.371 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:47.371 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:47.371 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:47.371 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:47.371 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:47.371 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:47.371 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:47.371 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:47.371 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:47.371 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:47.371 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:47.371 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:47.371 list of memzone associated elements. size: 602.262573 MiB 00:05:47.371 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:47.371 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.371 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:47.371 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.371 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:47.371 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_683316_0 00:05:47.371 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:47.371 associated memzone info: size: 48.002930 MiB name: MP_evtpool_683316_0 00:05:47.371 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:47.371 associated memzone info: size: 48.002930 MiB name: MP_msgpool_683316_0 00:05:47.371 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:47.371 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.371 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:47.371 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.371 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:47.371 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_683316 00:05:47.371 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:47.371 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_683316 00:05:47.371 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.371 associated memzone info: size: 1.007996 MiB name: MP_evtpool_683316 00:05:47.371 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:47.371 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.371 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:47.371 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.371 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:47.371 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.371 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:47.371 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.371 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:47.371 associated memzone info: size: 1.000366 MiB name: RG_ring_0_683316 00:05:47.371 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:47.371 associated memzone info: size: 1.000366 MiB name: RG_ring_1_683316 00:05:47.371 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:47.371 associated memzone info: size: 1.000366 MiB name: RG_ring_4_683316 00:05:47.371 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:47.371 associated memzone info: size: 1.000366 MiB name: RG_ring_5_683316 00:05:47.371 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:47.371 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_683316 00:05:47.371 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:47.371 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.371 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:47.371 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.371 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:47.372 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.372 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:47.372 associated memzone info: size: 0.125366 MiB name: RG_ring_2_683316 00:05:47.372 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:47.372 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.372 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:47.372 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.372 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:47.372 associated memzone info: size: 0.015991 MiB name: RG_ring_3_683316 00:05:47.372 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:47.372 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.372 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:47.372 associated memzone info: size: 0.000183 MiB name: MP_msgpool_683316 00:05:47.372 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:47.372 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_683316 00:05:47.372 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:47.372 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.372 14:25:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.372 14:25:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 683316 00:05:47.372 14:25:29 -- common/autotest_common.sh@926 -- # '[' -z 683316 ']' 00:05:47.372 14:25:29 -- common/autotest_common.sh@930 -- # kill -0 683316 00:05:47.372 14:25:29 -- common/autotest_common.sh@931 -- # uname 00:05:47.372 14:25:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.372 14:25:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 683316 00:05:47.372 14:25:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.372 14:25:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.372 14:25:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 683316' 00:05:47.372 killing process with pid 683316 00:05:47.372 14:25:29 -- common/autotest_common.sh@945 -- # kill 683316 00:05:47.372 14:25:29 -- common/autotest_common.sh@950 -- # wait 683316 00:05:47.632 00:05:47.632 real 0m1.526s 00:05:47.632 user 0m1.614s 00:05:47.632 sys 0m0.464s 00:05:47.632 14:25:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.632 14:25:30 -- common/autotest_common.sh@10 -- # set +x 00:05:47.632 ************************************ 00:05:47.632 END TEST dpdk_mem_utility 00:05:47.632 ************************************ 00:05:47.632 14:25:30 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:47.632 14:25:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.632 14:25:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.632 14:25:30 -- common/autotest_common.sh@10 -- # set +x 00:05:47.632 
00:05:47.632 ************************************
00:05:47.632 START TEST event
00:05:47.632 ************************************
00:05:47.632 14:25:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh
00:05:47.892 * Looking for test storage...
00:05:47.892 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event
00:05:47.892 14:25:30 -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:47.892 14:25:30 -- bdev/nbd_common.sh@6 -- # set -e
00:05:47.892 14:25:30 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:47.892 14:25:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:05:47.892 14:25:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:47.892 14:25:30 -- common/autotest_common.sh@10 -- # set +x
00:05:47.892 ************************************
00:05:47.892 START TEST event_perf
00:05:47.892 ************************************
00:05:47.892 14:25:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:47.892 Running I/O for 1 seconds...[2024-10-01 14:25:30.263392] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:05:47.892 [2024-10-01 14:25:30.263489] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683550 ]
00:05:47.892 EAL: No free 2048 kB hugepages reported on node 1
00:05:47.892 [2024-10-01 14:25:30.357630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:48.151 [2024-10-01 14:25:30.445767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:48.151 [2024-10-01 14:25:30.445822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:48.151 [2024-10-01 14:25:30.445858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.151 [2024-10-01 14:25:30.445859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:49.089 Running I/O for 1 seconds...
00:05:49.089 lcore 0: 186144
00:05:49.089 lcore 1: 186144
00:05:49.089 lcore 2: 186147
00:05:49.089 lcore 3: 186145
00:05:49.089 done.
00:05:49.089
00:05:49.089 real 0m1.279s
00:05:49.089 user 0m4.159s
00:05:49.089 sys 0m0.115s
00:05:49.089 14:25:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:49.089 14:25:31 -- common/autotest_common.sh@10 -- # set +x
00:05:49.089 ************************************
00:05:49.089 END TEST event_perf
00:05:49.089 ************************************
00:05:49.089 14:25:31 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:49.089 14:25:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:05:49.089 14:25:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:49.089 14:25:31 -- common/autotest_common.sh@10 -- # set +x
00:05:49.089 ************************************
00:05:49.089 START TEST event_reactor
00:05:49.089 ************************************
00:05:49.089 14:25:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:49.089 [2024-10-01 14:25:31.593077] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:05:49.089 [2024-10-01 14:25:31.593204] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683753 ]
00:05:49.349 EAL: No free 2048 kB hugepages reported on node 1
00:05:49.349 [2024-10-01 14:25:31.685520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.349 [2024-10-01 14:25:31.775034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.729 test_start
00:05:50.729 oneshot
00:05:50.729 tick 100
00:05:50.729 tick 100
00:05:50.729 tick 250
00:05:50.729 tick 100
00:05:50.729 tick 100
00:05:50.729 tick 100
00:05:50.729 tick 250
00:05:50.729 tick 500
00:05:50.729 tick 100
00:05:50.729 tick 100
00:05:50.729 tick 250
00:05:50.729 tick 100
00:05:50.729 tick 100
00:05:50.729 test_end
00:05:50.729
00:05:50.729 real 0m1.277s
00:05:50.729 user 0m1.160s
00:05:50.729 sys 0m0.111s
00:05:50.729 14:25:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:50.729 14:25:32 -- common/autotest_common.sh@10 -- # set +x
00:05:50.729 ************************************
00:05:50.729 END TEST event_reactor
00:05:50.729 ************************************
00:05:50.729 14:25:32 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:50.729 14:25:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:05:50.729 14:25:32 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:50.729 14:25:32 -- common/autotest_common.sh@10 -- # set +x
00:05:50.729 ************************************
00:05:50.729 START TEST event_reactor_perf
00:05:50.729 ************************************
00:05:50.729 14:25:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:50.729 [2024-10-01 14:25:32.919920] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:05:50.729 [2024-10-01 14:25:32.920016] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683949 ]
00:05:50.729 EAL: No free 2048 kB hugepages reported on node 1
00:05:50.729 [2024-10-01 14:25:33.013231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.729 [2024-10-01 14:25:33.104275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.670 test_start
00:05:51.670 test_end
00:05:51.670 Performance: 941007 events per second
00:05:51.670
00:05:51.670 real 0m1.278s
00:05:51.670 user 0m1.162s
00:05:51.670 sys 0m0.110s
00:05:51.670 14:25:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:51.670 14:25:34 -- common/autotest_common.sh@10 -- # set +x
00:05:51.670 ************************************
00:05:51.670 END TEST event_reactor_perf
00:05:51.670 ************************************
00:05:51.929 14:25:34 -- event/event.sh@49 -- # uname -s
00:05:51.929 14:25:34 -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:51.929 14:25:34 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:51.929 14:25:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:51.929 14:25:34 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:51.929 14:25:34 -- common/autotest_common.sh@10 -- # set +x
00:05:51.929 ************************************
00:05:51.929 START TEST event_scheduler
00:05:51.929 ************************************
00:05:51.929 14:25:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:51.929 * Looking for test storage...
00:05:51.929 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler
00:05:51.929 14:25:34 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:51.929 14:25:34 -- scheduler/scheduler.sh@35 -- # scheduler_pid=684176
00:05:51.929 14:25:34 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:51.929 14:25:34 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:51.929 14:25:34 -- scheduler/scheduler.sh@37 -- # waitforlisten 684176
00:05:51.929 14:25:34 -- common/autotest_common.sh@819 -- # '[' -z 684176 ']'
00:05:51.929 14:25:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:51.929 14:25:34 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:51.929 14:25:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:51.929 14:25:34 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:51.929 14:25:34 -- common/autotest_common.sh@10 -- # set +x
00:05:51.929 [2024-10-01 14:25:34.363558] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:05:51.929 [2024-10-01 14:25:34.363662] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684176 ]
00:05:51.929 EAL: No free 2048 kB hugepages reported on node 1
00:05:52.188 [2024-10-01 14:25:34.456013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:52.188 [2024-10-01 14:25:34.537156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.188 [2024-10-01 14:25:34.537258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:52.188 [2024-10-01 14:25:34.537357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:52.188 [2024-10-01 14:25:34.537358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:52.757 14:25:35 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:52.757 14:25:35 -- common/autotest_common.sh@852 -- # return 0
00:05:52.757 14:25:35 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:52.757 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:52.757 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:52.757 POWER: Env isn't set yet!
00:05:52.757 POWER: Attempting to initialise ACPI cpufreq power management...
00:05:52.757 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:52.757 POWER: Cannot set governor of lcore 0 to userspace
00:05:52.757 POWER: Attempting to initialise PSTAT power management...
00:05:52.757 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:05:52.757 POWER: Initialized successfully for lcore 0 power management
00:05:52.757 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:05:52.757 POWER: Initialized successfully for lcore 1 power management
00:05:52.757 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:05:52.757 POWER: Initialized successfully for lcore 2 power management
00:05:52.757 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:05:52.757 POWER: Initialized successfully for lcore 3 power management
00:05:52.757 [2024-10-01 14:25:35.259943] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:52.757 [2024-10-01 14:25:35.259961] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:52.757 [2024-10-01 14:25:35.259973] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:52.757 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:52.757 14:25:35 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:52.757 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:52.757 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 [2024-10-01 14:25:35.334371] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
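The POWER and scheduler_dynamic notices above come from the two rpc_cmd calls traced at scheduler.sh@39 and @40. The same sequence can be driven by hand against any SPDK app launched with --wait-for-rpc; a minimal sketch using the paths from this run (the load-limit/core-limit/busy values in the log are the dynamic scheduler's printed defaults, not arguments you must pass):

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

    # start the app paused before subsystem init so the scheduler can still be chosen
    "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!

    # pick the dynamic scheduler, then resume initialization; reactors start
    # and the governor/power-management notices are printed at this point
    "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic
    "$SPDK/scripts/rpc.py" framework_start_init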
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:53.016 14:25:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:53.016 14:25:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 ************************************
00:05:53.016 START TEST scheduler_create_thread
00:05:53.016 ************************************
00:05:53.016 14:25:35 -- common/autotest_common.sh@1104 -- # scheduler_create_thread
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 2
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 3
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 4
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 5
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 6
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 7
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 8
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 9
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 10
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.016 14:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:53.016 14:25:35 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:53.016 14:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.016 14:25:35 -- common/autotest_common.sh@10 -- # set +x
00:05:53.951 14:25:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:53.951 14:25:36 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:53.951 14:25:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:53.951 14:25:36 -- common/autotest_common.sh@10 -- # set +x
00:05:55.326 14:25:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:55.326 14:25:37 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:55.326 14:25:37 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:55.326 14:25:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:55.326 14:25:37 -- common/autotest_common.sh@10 -- # set +x
00:05:56.257 14:25:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:56.257
00:05:56.257 real 0m3.382s
00:05:56.257 user 0m0.022s
00:05:56.257 sys 0m0.008s
00:05:56.257 14:25:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:56.257 14:25:38 -- common/autotest_common.sh@10 -- # set +x
00:05:56.257 ************************************
00:05:56.257 END TEST scheduler_create_thread
00:05:56.257 ************************************
00:05:56.257 14:25:38 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:56.257 14:25:38 -- scheduler/scheduler.sh@46 -- # killprocess 684176
00:05:56.257 14:25:38 -- common/autotest_common.sh@926 -- # '[' -z 684176 ']'
00:05:56.257 14:25:38 -- common/autotest_common.sh@930 -- # kill -0 684176
00:05:56.257 14:25:38 -- common/autotest_common.sh@931 -- # uname
00:05:56.257 14:25:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:05:56.257 14:25:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 684176
00:05:56.514 14:25:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:05:56.514 14:25:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:05:56.514 14:25:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 684176'
killing process with pid 684176
14:25:38 -- common/autotest_common.sh@945 -- # kill 684176
14:25:38 -- common/autotest_common.sh@950 -- # wait 684176
00:05:56.770 [2024-10-01 14:25:39.106415] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
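Each bare number printed during scheduler_create_thread (2 through 10) is a thread id returned by the scheduler_thread_create plugin RPC. A minimal sketch of the create / set-active / delete round trip visible above, assuming rpc.py can locate scheduler_plugin on its plugin path as the test arranges (the method names and arguments are exactly those in the xtrace):

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }

    # a thread pinned to core 0 (mask 0x1) reporting ~100% active load
    tid=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)

    # drop the same thread to ~50% active, then remove it
    rpc scheduler_thread_set_active "$tid" 50
    rpc scheduler_thread_delete "$tid"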
00:05:56.770 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:05:56.770 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:05:56.770 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:05:56.770 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:05:56.770 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:05:56.770 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:05:56.770 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:05:56.770 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:05:57.027
00:05:57.027 real 0m5.110s
00:05:57.027 user 0m10.533s
00:05:57.027 sys 0m0.422s
00:05:57.027 14:25:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:57.027 14:25:39 -- common/autotest_common.sh@10 -- # set +x
00:05:57.027 ************************************
00:05:57.027 END TEST event_scheduler
00:05:57.027 ************************************
00:05:57.027 14:25:39 -- event/event.sh@51 -- # modprobe -n nbd
00:05:57.027 14:25:39 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:57.027 14:25:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:57.027 14:25:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:57.027 14:25:39 -- common/autotest_common.sh@10 -- # set +x
00:05:57.027 ************************************
00:05:57.027 START TEST app_repeat
00:05:57.027 ************************************
00:05:57.027 14:25:39 -- common/autotest_common.sh@1104 -- # app_repeat_test
00:05:57.027 14:25:39 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:57.027 14:25:39 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:57.027 14:25:39 -- event/event.sh@13 -- # local nbd_list
00:05:57.027 14:25:39 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:57.027 14:25:39 -- event/event.sh@14 -- # local bdev_list
00:05:57.027 14:25:39 -- event/event.sh@15 -- # local repeat_times=4
00:05:57.027 14:25:39 -- event/event.sh@17 -- # modprobe nbd
00:05:57.027 14:25:39 -- event/event.sh@19 -- # repeat_pid=684936
00:05:57.027 14:25:39 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:57.027 14:25:39 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 684936'
Process app_repeat pid: 684936
00:05:57.027 14:25:39 -- event/event.sh@23 -- # for i in {0..2}
00:05:57.027 14:25:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:05:57.027 14:25:39 -- event/event.sh@25 -- # waitforlisten 684936 /var/tmp/spdk-nbd.sock
00:05:57.027 14:25:39 -- common/autotest_common.sh@819 -- # '[' -z 684936 ']'
00:05:57.027 14:25:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:57.028 14:25:39 -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:57.028 14:25:39 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:57.028 14:25:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:57.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:57.028 14:25:39 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:57.028 14:25:39 -- common/autotest_common.sh@10 -- # set +x
00:05:57.028 [2024-10-01 14:25:39.421527] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:05:57.028 [2024-10-01 14:25:39.421633] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684936 ]
00:05:57.028 EAL: No free 2048 kB hugepages reported on node 1
00:05:57.028 [2024-10-01 14:25:39.498237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:57.287 [2024-10-01 14:25:39.593004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:57.287 [2024-10-01 14:25:39.593006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.853 14:25:40 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:57.853 14:25:40 -- common/autotest_common.sh@852 -- # return 0
00:05:57.853 14:25:40 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:58.111 Malloc0
00:05:58.111 14:25:40 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:58.370 Malloc1
00:05:58.370 14:25:40 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@12 -- # local i
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:58.370 /dev/nbd0
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:58.370 14:25:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:58.370 14:25:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:05:58.370 14:25:40 -- common/autotest_common.sh@857 -- # local i
00:05:58.370 14:25:40 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:58.370 14:25:40 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:58.370 14:25:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:05:58.370 14:25:40 -- common/autotest_common.sh@861 -- # break
00:05:58.370 14:25:40 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:58.370 14:25:40 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:58.370 14:25:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:58.370 1+0 records in
00:05:58.370 1+0 records out
00:05:58.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263154 s, 15.6 MB/s
00:05:58.370 14:25:40 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:58.629 14:25:40 -- common/autotest_common.sh@874 -- # size=4096
00:05:58.629 14:25:40 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:58.629 14:25:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:58.629 14:25:40 -- common/autotest_common.sh@877 -- # return 0
00:05:58.629 14:25:40 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:58.629 14:25:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.629 14:25:40 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:58.629 /dev/nbd1
00:05:58.629 14:25:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:58.629 14:25:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:58.629 14:25:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:05:58.629 14:25:41 -- common/autotest_common.sh@857 -- # local i
00:05:58.629 14:25:41 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:58.629 14:25:41 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:58.629 14:25:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:05:58.629 14:25:41 -- common/autotest_common.sh@861 -- # break
00:05:58.629 14:25:41 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:58.629 14:25:41 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:58.629 14:25:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:58.629 1+0 records in
00:05:58.629 1+0 records out
00:05:58.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272239 s, 15.0 MB/s
00:05:58.629 14:25:41 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:58.629 14:25:41 -- common/autotest_common.sh@874 -- # size=4096
00:05:58.629 14:25:41 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:58.629 14:25:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:58.629 14:25:41 -- common/autotest_common.sh@877 -- # return 0
00:05:58.629 14:25:41 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:58.629 14:25:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.629 14:25:41 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:58.629 14:25:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.629 14:25:41 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:58.887 {
00:05:58.887 "nbd_device": "/dev/nbd0",
00:05:58.887 "bdev_name": "Malloc0"
00:05:58.887 },
00:05:58.887 {
00:05:58.887 "nbd_device": "/dev/nbd1",
00:05:58.887 "bdev_name": "Malloc1"
00:05:58.887 }
00:05:58.887 ]'
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@64 -- # echo '[
00:05:58.887 {
00:05:58.887 "nbd_device": "/dev/nbd0",
00:05:58.887 "bdev_name": "Malloc0"
00:05:58.887 },
00:05:58.887 {
00:05:58.887 "nbd_device": "/dev/nbd1",
00:05:58.887 "bdev_name": "Malloc1"
00:05:58.887 }
00:05:58.887 ]'
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:58.887 /dev/nbd1'
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:58.887 /dev/nbd1'
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@65 -- # count=2
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@66 -- # echo 2
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@95 -- # count=2
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:58.887 256+0 records in
00:05:58.887 256+0 records out
00:05:58.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116743 s, 89.8 MB/s
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:58.887 256+0 records in
00:05:58.887 256+0 records out
00:05:58.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223379 s, 46.9 MB/s
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:58.887 14:25:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:59.146 256+0 records in
00:05:59.146 256+0 records out
00:05:59.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223263 s, 47.0 MB/s
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@51 -- # local i
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@41 -- # break
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@45 -- # return 0
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:59.146 14:25:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@41 -- # break
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@45 -- # return 0
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.405 14:25:41 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@65 -- # echo ''
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@65 -- # true
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@65 -- # count=0
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@66 -- # echo 0
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@104 -- # count=0
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:59.664 14:25:42 -- bdev/nbd_common.sh@109 -- # return 0
00:05:59.664 14:25:42 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
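Round 0 ends here, and its data path is worth calling out: nbd_dd_data_verify is nothing more than a dd round trip followed by a byte compare. A minimal sketch of the same write-then-verify idea using the commands shown above (same 1 MiB size, same O_DIRECT writes, and the temp-file path this run uses):

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    tmp="$SPDK/test/event/nbdrandtest"

    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it to each NBD export
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                              # byte-compare the first 1 MiB back
    done
    rm "$tmp"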
00:05:59.924 14:25:42 -- event/event.sh@35 -- # sleep 3 00:06:00.183 [2024-10-01 14:25:42.497577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.183 [2024-10-01 14:25:42.582364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.183 [2024-10-01 14:25:42.582366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.183 [2024-10-01 14:25:42.626657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.183 [2024-10-01 14:25:42.626706] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.471 14:25:45 -- event/event.sh@23 -- # for i in {0..2} 00:06:03.471 14:25:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:03.471 spdk_app_start Round 1 00:06:03.471 14:25:45 -- event/event.sh@25 -- # waitforlisten 684936 /var/tmp/spdk-nbd.sock 00:06:03.471 14:25:45 -- common/autotest_common.sh@819 -- # '[' -z 684936 ']' 00:06:03.471 14:25:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.471 14:25:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.471 14:25:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.471 14:25:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.471 14:25:45 -- common/autotest_common.sh@10 -- # set +x 00:06:03.471 14:25:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.471 14:25:45 -- common/autotest_common.sh@852 -- # return 0 00:06:03.471 14:25:45 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.471 Malloc0 00:06:03.471 14:25:45 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.471 Malloc1 00:06:03.471 14:25:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@12 -- # local i 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.471 14:25:45 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.730 
/dev/nbd0 00:06:03.730 14:25:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.730 14:25:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.730 14:25:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:03.730 14:25:46 -- common/autotest_common.sh@857 -- # local i 00:06:03.730 14:25:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.730 14:25:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.730 14:25:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:03.730 14:25:46 -- common/autotest_common.sh@861 -- # break 00:06:03.730 14:25:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.730 14:25:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.730 14:25:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.730 1+0 records in 00:06:03.730 1+0 records out 00:06:03.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143877 s, 28.5 MB/s 00:06:03.730 14:25:46 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:03.730 14:25:46 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.730 14:25:46 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:03.730 14:25:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:03.730 14:25:46 -- common/autotest_common.sh@877 -- # return 0 00:06:03.730 14:25:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.730 14:25:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.730 14:25:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.989 /dev/nbd1 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.989 14:25:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:03.989 14:25:46 -- common/autotest_common.sh@857 -- # local i 00:06:03.989 14:25:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.989 14:25:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.989 14:25:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:03.989 14:25:46 -- common/autotest_common.sh@861 -- # break 00:06:03.989 14:25:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.989 14:25:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.989 14:25:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.989 1+0 records in 00:06:03.989 1+0 records out 00:06:03.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000139141 s, 29.4 MB/s 00:06:03.989 14:25:46 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:03.989 14:25:46 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.989 14:25:46 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:03.989 14:25:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:03.989 14:25:46 -- common/autotest_common.sh@877 -- # return 0 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.989 { 00:06:03.989 "nbd_device": "/dev/nbd0", 00:06:03.989 "bdev_name": "Malloc0" 00:06:03.989 }, 00:06:03.989 { 00:06:03.989 "nbd_device": "/dev/nbd1", 00:06:03.989 "bdev_name": "Malloc1" 00:06:03.989 } 00:06:03.989 ]' 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.989 { 00:06:03.989 "nbd_device": "/dev/nbd0", 00:06:03.989 "bdev_name": "Malloc0" 00:06:03.989 }, 00:06:03.989 { 00:06:03.989 "nbd_device": "/dev/nbd1", 00:06:03.989 "bdev_name": "Malloc1" 00:06:03.989 } 00:06:03.989 ]' 00:06:03.989 14:25:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.248 /dev/nbd1' 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.248 /dev/nbd1' 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.248 256+0 records in 00:06:04.248 256+0 records out 00:06:04.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382359 s, 274 MB/s 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.248 256+0 records in 00:06:04.248 256+0 records out 00:06:04.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203047 s, 51.6 MB/s 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.248 14:25:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.249 256+0 records in 00:06:04.249 256+0 records out 00:06:04.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218894 s, 47.9 MB/s 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@51 -- # local i 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.249 14:25:46 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@41 -- # break 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.507 14:25:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.507 14:25:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.508 14:25:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.508 14:25:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.508 14:25:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.508 14:25:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.508 14:25:47 -- bdev/nbd_common.sh@41 -- # break 00:06:04.508 14:25:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.508 14:25:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.508 14:25:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.508 14:25:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.766 14:25:47 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@65 -- # true 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.766 14:25:47 -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.766 14:25:47 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.025 14:25:47 -- event/event.sh@35 -- # sleep 3 00:06:05.284 [2024-10-01 14:25:47.631028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.284 [2024-10-01 14:25:47.713052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.284 [2024-10-01 14:25:47.713055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.284 [2024-10-01 14:25:47.760529] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.284 [2024-10-01 14:25:47.760575] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.571 14:25:50 -- event/event.sh@23 -- # for i in {0..2} 00:06:08.571 14:25:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:08.571 spdk_app_start Round 2 00:06:08.571 14:25:50 -- event/event.sh@25 -- # waitforlisten 684936 /var/tmp/spdk-nbd.sock 00:06:08.571 14:25:50 -- common/autotest_common.sh@819 -- # '[' -z 684936 ']' 00:06:08.571 14:25:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.571 14:25:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.571 14:25:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:08.571 14:25:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.571 14:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:08.571 14:25:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.571 14:25:50 -- common/autotest_common.sh@852 -- # return 0 00:06:08.571 14:25:50 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.571 Malloc0 00:06:08.571 14:25:50 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.571 Malloc1 00:06:08.571 14:25:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.571 14:25:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.831 /dev/nbd0 00:06:08.831 14:25:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.831 14:25:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.831 14:25:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:08.831 14:25:51 -- common/autotest_common.sh@857 -- # local i 00:06:08.831 14:25:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.831 14:25:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.831 14:25:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:08.831 14:25:51 -- common/autotest_common.sh@861 -- # break 00:06:08.831 14:25:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.831 14:25:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.831 14:25:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.831 1+0 records in 00:06:08.831 1+0 records out 00:06:08.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269105 s, 15.2 MB/s 00:06:08.831 14:25:51 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:08.831 14:25:51 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.831 14:25:51 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:08.831 14:25:51 -- 
common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.831 14:25:51 -- common/autotest_common.sh@877 -- # return 0 00:06:08.831 14:25:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.831 14:25:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.831 14:25:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.091 /dev/nbd1 00:06:09.091 14:25:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.091 14:25:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.091 14:25:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:09.091 14:25:51 -- common/autotest_common.sh@857 -- # local i 00:06:09.091 14:25:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:09.091 14:25:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.091 14:25:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:09.091 14:25:51 -- common/autotest_common.sh@861 -- # break 00:06:09.091 14:25:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.091 14:25:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.091 14:25:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.091 1+0 records in 00:06:09.091 1+0 records out 00:06:09.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171899 s, 23.8 MB/s 00:06:09.091 14:25:51 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.091 14:25:51 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.091 14:25:51 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.091 14:25:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:09.091 14:25:51 -- common/autotest_common.sh@877 -- # return 0 00:06:09.091 14:25:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.091 14:25:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.091 14:25:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.091 14:25:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.091 14:25:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.350 { 00:06:09.350 "nbd_device": "/dev/nbd0", 00:06:09.350 "bdev_name": "Malloc0" 00:06:09.350 }, 00:06:09.350 { 00:06:09.350 "nbd_device": "/dev/nbd1", 00:06:09.350 "bdev_name": "Malloc1" 00:06:09.350 } 00:06:09.350 ]' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.350 { 00:06:09.350 "nbd_device": "/dev/nbd0", 00:06:09.350 "bdev_name": "Malloc0" 00:06:09.350 }, 00:06:09.350 { 00:06:09.350 "nbd_device": "/dev/nbd1", 00:06:09.350 "bdev_name": "Malloc1" 00:06:09.350 } 00:06:09.350 ]' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.350 /dev/nbd1' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.350 /dev/nbd1' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.350 14:25:51 -- 
bdev/nbd_common.sh@95 -- # count=2 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.350 256+0 records in 00:06:09.350 256+0 records out 00:06:09.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112677 s, 93.1 MB/s 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.350 256+0 records in 00:06:09.350 256+0 records out 00:06:09.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200652 s, 52.3 MB/s 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.350 256+0 records in 00:06:09.350 256+0 records out 00:06:09.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021966 s, 47.7 MB/s 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.350 14:25:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.351 14:25:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.351 14:25:51 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.351 14:25:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.351 14:25:51 -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@41 -- # break 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.610 14:25:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@41 -- # break 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.869 14:25:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@65 -- # true 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.129 14:25:52 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.129 14:25:52 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.129 14:25:52 -- event/event.sh@35 -- # sleep 3 00:06:10.388 [2024-10-01 14:25:52.837210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.647 [2024-10-01 14:25:52.921555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.647 [2024-10-01 14:25:52.921557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.647 [2024-10-01 14:25:52.968910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.647 [2024-10-01 14:25:52.968953] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
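The trace above exercises SPDK's NBD data-verify path end to end: two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1 over the /var/tmp/spdk-nbd.sock RPC socket, a random 1 MiB pattern is written through dd and checked byte-for-byte with cmp, and the devices are detached until nbd_get_disks reports an empty list. A condensed sketch of the same RPC sequence for a single device, assuming an spdk_tgt is already listening on that socket and the nbd kernel module is loaded (the temp-file path is illustrative):

  #!/usr/bin/env bash
  set -euo pipefail
  rpc="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=/tmp/nbdrandtest

  # 64 MB malloc bdev with a 4096-byte block size, exported over NBD.
  $rpc bdev_malloc_create 64 4096        # prints the new bdev name, e.g. Malloc0
  $rpc nbd_start_disk Malloc0 /dev/nbd0

  # Write 256 x 4 KiB of random data with O_DIRECT and verify it.
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0

  # Detach and confirm nothing is left exported.
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_get_disks                     # expected output: []
  rm -f "$tmp"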
00:06:13.184 14:25:55 -- event/event.sh@38 -- # waitforlisten 684936 /var/tmp/spdk-nbd.sock 00:06:13.184 14:25:55 -- common/autotest_common.sh@819 -- # '[' -z 684936 ']' 00:06:13.184 14:25:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.184 14:25:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.184 14:25:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.184 14:25:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.184 14:25:55 -- common/autotest_common.sh@10 -- # set +x 00:06:13.443 14:25:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.443 14:25:55 -- common/autotest_common.sh@852 -- # return 0 00:06:13.443 14:25:55 -- event/event.sh@39 -- # killprocess 684936 00:06:13.443 14:25:55 -- common/autotest_common.sh@926 -- # '[' -z 684936 ']' 00:06:13.443 14:25:55 -- common/autotest_common.sh@930 -- # kill -0 684936 00:06:13.443 14:25:55 -- common/autotest_common.sh@931 -- # uname 00:06:13.443 14:25:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:13.443 14:25:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 684936 00:06:13.443 14:25:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:13.443 14:25:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:13.443 14:25:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 684936' 00:06:13.443 killing process with pid 684936 00:06:13.443 14:25:55 -- common/autotest_common.sh@945 -- # kill 684936 00:06:13.443 14:25:55 -- common/autotest_common.sh@950 -- # wait 684936 00:06:13.704 spdk_app_start is called in Round 0. 00:06:13.704 Shutdown signal received, stop current app iteration 00:06:13.704 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:13.704 spdk_app_start is called in Round 1. 00:06:13.704 Shutdown signal received, stop current app iteration 00:06:13.704 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:13.704 spdk_app_start is called in Round 2. 00:06:13.704 Shutdown signal received, stop current app iteration 00:06:13.704 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:13.704 spdk_app_start is called in Round 3. 
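Each app_repeat round above is torn down through the killprocess helper, whose safety checks are visible in the trace: kill -0 confirms the pid is still alive, ps --no-headers -o comm= confirms it is still the reactor (so a recycled pid is never signalled), and only then is the process killed and reaped. A simplified sketch of that pattern; the real helper in autotest_common.sh also special-cases targets launched under sudo:

  #!/usr/bin/env bash
  # killprocess <pid>: signal a process only if it still looks like ours.
  killprocess() {
      local pid=$1 process_name
      kill -0 "$pid"                                  # fails if the pid is gone
      process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
      [ "$process_name" = sudo ] && return 1          # left to the real helper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                 # reap it when it is our child
  }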
00:06:13.704 Shutdown signal received, stop current app iteration 00:06:13.704 14:25:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:13.704 14:25:56 -- event/event.sh@42 -- # return 0 00:06:13.704 00:06:13.704 real 0m16.655s 00:06:13.704 user 0m35.484s 00:06:13.704 sys 0m3.136s 00:06:13.704 14:25:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.704 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:13.704 ************************************ 00:06:13.704 END TEST app_repeat 00:06:13.704 ************************************ 00:06:13.704 14:25:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:13.704 14:25:56 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.704 14:25:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.704 14:25:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.704 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:13.704 ************************************ 00:06:13.704 START TEST cpu_locks 00:06:13.704 ************************************ 00:06:13.704 14:25:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.704 * Looking for test storage... 00:06:13.704 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:13.704 14:25:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:13.704 14:25:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:13.704 14:25:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:13.704 14:25:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:13.704 14:25:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.704 14:25:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.704 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:13.704 ************************************ 00:06:13.704 START TEST default_locks 00:06:13.704 ************************************ 00:06:13.704 14:25:56 -- common/autotest_common.sh@1104 -- # default_locks 00:06:13.704 14:25:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=687474 00:06:13.704 14:25:56 -- event/cpu_locks.sh@47 -- # waitforlisten 687474 00:06:13.704 14:25:56 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.704 14:25:56 -- common/autotest_common.sh@819 -- # '[' -z 687474 ']' 00:06:13.704 14:25:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.704 14:25:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.704 14:25:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.704 14:25:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.704 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:13.704 [2024-10-01 14:25:56.224637] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
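Every test below starts by launching a fresh spdk_tgt and blocking in waitforlisten until the RPC socket answers ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). The helper's body is not shown in this trace, so the following is only a rough approximation of the loop it performs, reusing the max_retries=100 visible above; the real helper additionally confirms the socket accepts RPCs rather than merely existing:

  #!/usr/bin/env bash
  # Hypothetical approximation of waitforlisten <pid> [rpc_addr].
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" || return 1      # target died during startup
          [ -S "$rpc_addr" ] && return 0  # socket exists; assume it is listening
          sleep 0.1
      done
      return 1
  }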
00:06:13.704 [2024-10-01 14:25:56.224707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687474 ] 00:06:13.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.963 [2024-10-01 14:25:56.299252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.963 [2024-10-01 14:25:56.390288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.963 [2024-10-01 14:25:56.390401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.901 14:25:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.901 14:25:57 -- common/autotest_common.sh@852 -- # return 0 00:06:14.901 14:25:57 -- event/cpu_locks.sh@49 -- # locks_exist 687474 00:06:14.901 14:25:57 -- event/cpu_locks.sh@22 -- # lslocks -p 687474 00:06:14.901 14:25:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.901 lslocks: write error 00:06:14.901 14:25:57 -- event/cpu_locks.sh@50 -- # killprocess 687474 00:06:14.901 14:25:57 -- common/autotest_common.sh@926 -- # '[' -z 687474 ']' 00:06:14.901 14:25:57 -- common/autotest_common.sh@930 -- # kill -0 687474 00:06:14.901 14:25:57 -- common/autotest_common.sh@931 -- # uname 00:06:14.901 14:25:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:14.901 14:25:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 687474 00:06:14.901 14:25:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:14.901 14:25:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:14.901 14:25:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 687474' 00:06:14.901 killing process with pid 687474 00:06:14.901 14:25:57 -- common/autotest_common.sh@945 -- # kill 687474 00:06:14.901 14:25:57 -- common/autotest_common.sh@950 -- # wait 687474 00:06:15.469 14:25:57 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 687474 00:06:15.469 14:25:57 -- common/autotest_common.sh@640 -- # local es=0 00:06:15.469 14:25:57 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 687474 00:06:15.469 14:25:57 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:15.469 14:25:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:15.469 14:25:57 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:15.469 14:25:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:15.469 14:25:57 -- common/autotest_common.sh@643 -- # waitforlisten 687474 00:06:15.469 14:25:57 -- common/autotest_common.sh@819 -- # '[' -z 687474 ']' 00:06:15.469 14:25:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.469 14:25:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.469 14:25:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
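default_locks then asserts that a target started with -m 0x1 really holds its core lock: locks_exist pipes lslocks -p <pid> into grep -q spdk_cpu_lock (the stray "lslocks: write error" above is apparently just lslocks reacting to grep -q closing the pipe early, not a test failure). The same check, plus an independent probe of the lock file with flock (the file name follows the /var/tmp/spdk_cpu_lock_NNN pattern seen later in this trace), might look like:

  #!/usr/bin/env bash
  pid=$1

  # Does the target hold any spdk_cpu_lock_* advisory lock?
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by pid $pid"
  fi

  # Probe core 0's lock file directly: flock -n fails while spdk_tgt owns it.
  if ! flock -n /var/tmp/spdk_cpu_lock_000 true; then
      echo "core 0 is claimed"
  fi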
00:06:15.469 14:25:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.469 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:15.469 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (687474) - No such process 00:06:15.469 ERROR: process (pid: 687474) is no longer running 00:06:15.469 14:25:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.469 14:25:57 -- common/autotest_common.sh@852 -- # return 1 00:06:15.469 14:25:57 -- common/autotest_common.sh@643 -- # es=1 00:06:15.469 14:25:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:15.469 14:25:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:15.469 14:25:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:15.469 14:25:57 -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.469 14:25:57 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.469 14:25:57 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.469 14:25:57 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.469 00:06:15.469 real 0m1.550s 00:06:15.469 user 0m1.606s 00:06:15.469 sys 0m0.548s 00:06:15.469 14:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.469 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:15.469 ************************************ 00:06:15.469 END TEST default_locks 00:06:15.470 ************************************ 00:06:15.470 14:25:57 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.470 14:25:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.470 14:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.470 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:15.470 ************************************ 00:06:15.470 START TEST default_locks_via_rpc 00:06:15.470 ************************************ 00:06:15.470 14:25:57 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:15.470 14:25:57 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=687693 00:06:15.470 14:25:57 -- event/cpu_locks.sh@63 -- # waitforlisten 687693 00:06:15.470 14:25:57 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.470 14:25:57 -- common/autotest_common.sh@819 -- # '[' -z 687693 ']' 00:06:15.470 14:25:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.470 14:25:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.470 14:25:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.470 14:25:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.470 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:15.470 [2024-10-01 14:25:57.819348] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
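The NOT wrapper above inverts a command's exit status: once pid 687474 is gone, waitforlisten must fail ("No such process") for the test to pass, and the es bookkeeping distinguishes an ordinary non-zero exit from death by signal (es > 128). A stripped-down sketch of that logic, omitting the allowed-status list the real helper supports (the empty [[ -n '' ]] check above):

  #!/usr/bin/env bash
  # NOT <cmd...>: succeed only when the wrapped command fails cleanly.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return 1  # killed by a signal: not a clean failure
      (( es == 0 )) && return 1   # succeeded when failure was expected
      return 0
  }

  NOT false && echo "false failed, as required"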
00:06:15.470 [2024-10-01 14:25:57.819418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687693 ] 00:06:15.470 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.470 [2024-10-01 14:25:57.890022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.470 [2024-10-01 14:25:57.979537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.470 [2024-10-01 14:25:57.979654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.407 14:25:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.407 14:25:58 -- common/autotest_common.sh@852 -- # return 0 00:06:16.407 14:25:58 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:16.407 14:25:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:16.407 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:16.407 14:25:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.407 14:25:58 -- event/cpu_locks.sh@67 -- # no_locks 00:06:16.407 14:25:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.407 14:25:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.407 14:25:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.407 14:25:58 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.407 14:25:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:16.407 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:16.407 14:25:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.407 14:25:58 -- event/cpu_locks.sh@71 -- # locks_exist 687693 00:06:16.407 14:25:58 -- event/cpu_locks.sh@22 -- # lslocks -p 687693 00:06:16.407 14:25:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.976 14:25:59 -- event/cpu_locks.sh@73 -- # killprocess 687693 00:06:16.976 14:25:59 -- common/autotest_common.sh@926 -- # '[' -z 687693 ']' 00:06:16.976 14:25:59 -- common/autotest_common.sh@930 -- # kill -0 687693 00:06:16.976 14:25:59 -- common/autotest_common.sh@931 -- # uname 00:06:16.976 14:25:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.976 14:25:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 687693 00:06:16.976 14:25:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:16.976 14:25:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:16.976 14:25:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 687693' 00:06:16.976 killing process with pid 687693 00:06:16.976 14:25:59 -- common/autotest_common.sh@945 -- # kill 687693 00:06:16.976 14:25:59 -- common/autotest_common.sh@950 -- # wait 687693 00:06:17.235 00:06:17.235 real 0m1.798s 00:06:17.235 user 0m1.883s 00:06:17.235 sys 0m0.620s 00:06:17.235 14:25:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.235 14:25:59 -- common/autotest_common.sh@10 -- # set +x 00:06:17.235 ************************************ 00:06:17.235 END TEST default_locks_via_rpc 00:06:17.235 ************************************ 00:06:17.235 14:25:59 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.235 14:25:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.235 14:25:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.235 14:25:59 -- common/autotest_common.sh@10 
-- # set +x 00:06:17.235 ************************************ 00:06:17.235 START TEST non_locking_app_on_locked_coremask 00:06:17.235 ************************************ 00:06:17.235 14:25:59 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:17.235 14:25:59 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=687911 00:06:17.235 14:25:59 -- event/cpu_locks.sh@81 -- # waitforlisten 687911 /var/tmp/spdk.sock 00:06:17.235 14:25:59 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.235 14:25:59 -- common/autotest_common.sh@819 -- # '[' -z 687911 ']' 00:06:17.236 14:25:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.236 14:25:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.236 14:25:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.236 14:25:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.236 14:25:59 -- common/autotest_common.sh@10 -- # set +x 00:06:17.236 [2024-10-01 14:25:59.666521] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:17.236 [2024-10-01 14:25:59.666599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687911 ] 00:06:17.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.236 [2024-10-01 14:25:59.741931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.495 [2024-10-01 14:25:59.833744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.495 [2024-10-01 14:25:59.833887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.061 14:26:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.061 14:26:00 -- common/autotest_common.sh@852 -- # return 0 00:06:18.061 14:26:00 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.061 14:26:00 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=688088 00:06:18.061 14:26:00 -- event/cpu_locks.sh@85 -- # waitforlisten 688088 /var/tmp/spdk2.sock 00:06:18.061 14:26:00 -- common/autotest_common.sh@819 -- # '[' -z 688088 ']' 00:06:18.061 14:26:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.061 14:26:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.062 14:26:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.062 14:26:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.062 14:26:00 -- common/autotest_common.sh@10 -- # set +x 00:06:18.062 [2024-10-01 14:26:00.522200] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
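default_locks_via_rpc, completed just above, shows the same core locks being managed at runtime rather than at startup: the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs are issued through rpc_cmd and the lock's presence is re-verified with lslocks. Driven directly with rpc.py against the default socket, the sequence is roughly:

  #!/usr/bin/env bash
  rpc="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  pid=$1  # pid of the spdk_tgt behind /var/tmp/spdk.sock

  $rpc framework_disable_cpumask_locks  # release the per-core file locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "locks released"

  $rpc framework_enable_cpumask_locks   # claim them again
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"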
00:06:18.062 [2024-10-01 14:26:00.522255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688088 ] 00:06:18.062 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.382 [2024-10-01 14:26:00.621521] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:18.382 [2024-10-01 14:26:00.621550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.382 [2024-10-01 14:26:00.788316] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.382 [2024-10-01 14:26:00.788429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.018 14:26:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.018 14:26:01 -- common/autotest_common.sh@852 -- # return 0 00:06:19.018 14:26:01 -- event/cpu_locks.sh@87 -- # locks_exist 687911 00:06:19.018 14:26:01 -- event/cpu_locks.sh@22 -- # lslocks -p 687911 00:06:19.018 14:26:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.588 lslocks: write error 00:06:19.588 14:26:01 -- event/cpu_locks.sh@89 -- # killprocess 687911 00:06:19.588 14:26:01 -- common/autotest_common.sh@926 -- # '[' -z 687911 ']' 00:06:19.588 14:26:01 -- common/autotest_common.sh@930 -- # kill -0 687911 00:06:19.588 14:26:01 -- common/autotest_common.sh@931 -- # uname 00:06:19.588 14:26:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:19.588 14:26:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 687911 00:06:19.588 14:26:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:19.588 14:26:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:19.588 14:26:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 687911' 00:06:19.588 killing process with pid 687911 00:06:19.588 14:26:01 -- common/autotest_common.sh@945 -- # kill 687911 00:06:19.588 14:26:01 -- common/autotest_common.sh@950 -- # wait 687911 00:06:20.156 14:26:02 -- event/cpu_locks.sh@90 -- # killprocess 688088 00:06:20.156 14:26:02 -- common/autotest_common.sh@926 -- # '[' -z 688088 ']' 00:06:20.156 14:26:02 -- common/autotest_common.sh@930 -- # kill -0 688088 00:06:20.156 14:26:02 -- common/autotest_common.sh@931 -- # uname 00:06:20.156 14:26:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.156 14:26:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 688088 00:06:20.416 14:26:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.416 14:26:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.416 14:26:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 688088' 00:06:20.416 killing process with pid 688088 00:06:20.416 14:26:02 -- common/autotest_common.sh@945 -- # kill 688088 00:06:20.416 14:26:02 -- common/autotest_common.sh@950 -- # wait 688088 00:06:20.674 00:06:20.674 real 0m3.412s 00:06:20.674 user 0m3.602s 00:06:20.674 sys 0m1.120s 00:06:20.674 14:26:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.674 14:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:20.674 ************************************ 00:06:20.674 END TEST non_locking_app_on_locked_coremask 00:06:20.674 ************************************ 00:06:20.674 14:26:03 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
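non_locking_app_on_locked_coremask, which just completed, demonstrates the opt-out: a second target may share core 0 with a lock-holding instance provided it passes --disable-cpumask-locks and uses its own RPC socket (-r /var/tmp/spdk2.sock). The two launch lines from the trace, reduced to their essentials (the test waits on each socket before proceeding):

  #!/usr/bin/env bash
  spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  # First instance claims core 0, creating /var/tmp/spdk_cpu_lock_000.
  $spdk_tgt -m 0x1 &
  pid1=$!

  # Second instance reuses core 0 but skips the lock, on a separate socket.
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!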
00:06:20.674 14:26:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.674 14:26:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.674 14:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:20.674 ************************************ 00:06:20.674 START TEST locking_app_on_unlocked_coremask 00:06:20.674 ************************************ 00:06:20.674 14:26:03 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:20.675 14:26:03 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=688483 00:06:20.675 14:26:03 -- event/cpu_locks.sh@99 -- # waitforlisten 688483 /var/tmp/spdk.sock 00:06:20.675 14:26:03 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:20.675 14:26:03 -- common/autotest_common.sh@819 -- # '[' -z 688483 ']' 00:06:20.675 14:26:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.675 14:26:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.675 14:26:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.675 14:26:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.675 14:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:20.675 [2024-10-01 14:26:03.124670] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:20.675 [2024-10-01 14:26:03.124761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688483 ] 00:06:20.675 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.675 [2024-10-01 14:26:03.197114] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:20.675 [2024-10-01 14:26:03.197148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.933 [2024-10-01 14:26:03.282515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.933 [2024-10-01 14:26:03.282630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.502 14:26:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.502 14:26:03 -- common/autotest_common.sh@852 -- # return 0 00:06:21.502 14:26:03 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.502 14:26:03 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=688498 00:06:21.502 14:26:03 -- event/cpu_locks.sh@103 -- # waitforlisten 688498 /var/tmp/spdk2.sock 00:06:21.502 14:26:03 -- common/autotest_common.sh@819 -- # '[' -z 688498 ']' 00:06:21.502 14:26:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.502 14:26:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.502 14:26:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:21.502 14:26:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.502 14:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:21.502 [2024-10-01 14:26:03.977991] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:21.502 [2024-10-01 14:26:03.978077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688498 ] 00:06:21.502 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.761 [2024-10-01 14:26:04.076698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.761 [2024-10-01 14:26:04.235969] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.761 [2024-10-01 14:26:04.236088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.328 14:26:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.328 14:26:04 -- common/autotest_common.sh@852 -- # return 0 00:06:22.328 14:26:04 -- event/cpu_locks.sh@105 -- # locks_exist 688498 00:06:22.328 14:26:04 -- event/cpu_locks.sh@22 -- # lslocks -p 688498 00:06:22.328 14:26:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.709 lslocks: write error 00:06:23.709 14:26:06 -- event/cpu_locks.sh@107 -- # killprocess 688483 00:06:23.709 14:26:06 -- common/autotest_common.sh@926 -- # '[' -z 688483 ']' 00:06:23.709 14:26:06 -- common/autotest_common.sh@930 -- # kill -0 688483 00:06:23.709 14:26:06 -- common/autotest_common.sh@931 -- # uname 00:06:23.709 14:26:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.709 14:26:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 688483 00:06:23.709 14:26:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.709 14:26:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.709 14:26:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 688483' 00:06:23.709 killing process with pid 688483 00:06:23.709 14:26:06 -- common/autotest_common.sh@945 -- # kill 688483 00:06:23.709 14:26:06 -- common/autotest_common.sh@950 -- # wait 688483 00:06:24.648 14:26:06 -- event/cpu_locks.sh@108 -- # killprocess 688498 00:06:24.648 14:26:06 -- common/autotest_common.sh@926 -- # '[' -z 688498 ']' 00:06:24.648 14:26:06 -- common/autotest_common.sh@930 -- # kill -0 688498 00:06:24.648 14:26:06 -- common/autotest_common.sh@931 -- # uname 00:06:24.648 14:26:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.648 14:26:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 688498 00:06:24.648 14:26:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.648 14:26:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.648 14:26:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 688498' 00:06:24.648 killing process with pid 688498 00:06:24.648 14:26:06 -- common/autotest_common.sh@945 -- # kill 688498 00:06:24.648 14:26:06 -- common/autotest_common.sh@950 -- # wait 688498 00:06:24.907 00:06:24.907 real 0m4.134s 00:06:24.907 user 0m4.403s 00:06:24.907 sys 0m1.383s 00:06:24.907 14:26:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.907 14:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:24.907 ************************************ 00:06:24.907 END TEST locking_app_on_unlocked_coremask 00:06:24.907 
************************************ 00:06:24.907 14:26:07 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:24.907 14:26:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.907 14:26:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.907 14:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:24.907 ************************************ 00:06:24.907 START TEST locking_app_on_locked_coremask 00:06:24.907 ************************************ 00:06:24.907 14:26:07 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:24.907 14:26:07 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=689059 00:06:24.907 14:26:07 -- event/cpu_locks.sh@116 -- # waitforlisten 689059 /var/tmp/spdk.sock 00:06:24.907 14:26:07 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.907 14:26:07 -- common/autotest_common.sh@819 -- # '[' -z 689059 ']' 00:06:24.907 14:26:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.907 14:26:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.907 14:26:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.907 14:26:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.907 14:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:24.907 [2024-10-01 14:26:07.310394] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:24.907 [2024-10-01 14:26:07.310471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689059 ] 00:06:24.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.907 [2024-10-01 14:26:07.383684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.167 [2024-10-01 14:26:07.466477] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.167 [2024-10-01 14:26:07.466595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.736 14:26:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.736 14:26:08 -- common/autotest_common.sh@852 -- # return 0 00:06:25.736 14:26:08 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=689144 00:06:25.736 14:26:08 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 689144 /var/tmp/spdk2.sock 00:06:25.736 14:26:08 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.736 14:26:08 -- common/autotest_common.sh@640 -- # local es=0 00:06:25.737 14:26:08 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 689144 /var/tmp/spdk2.sock 00:06:25.737 14:26:08 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:25.737 14:26:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.737 14:26:08 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:25.737 14:26:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.737 14:26:08 -- common/autotest_common.sh@643 -- # waitforlisten 689144 /var/tmp/spdk2.sock 00:06:25.737 14:26:08 -- common/autotest_common.sh@819 -- # '[' -z 689144 
']' 00:06:25.737 14:26:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.737 14:26:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.737 14:26:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.737 14:26:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.737 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:25.737 [2024-10-01 14:26:08.177444] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:25.737 [2024-10-01 14:26:08.177510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689144 ] 00:06:25.737 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.996 [2024-10-01 14:26:08.278831] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 689059 has claimed it. 00:06:25.996 [2024-10-01 14:26:08.278868] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.565 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (689144) - No such process 00:06:26.565 ERROR: process (pid: 689144) is no longer running 00:06:26.565 14:26:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.565 14:26:08 -- common/autotest_common.sh@852 -- # return 1 00:06:26.565 14:26:08 -- common/autotest_common.sh@643 -- # es=1 00:06:26.565 14:26:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.565 14:26:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:26.565 14:26:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.565 14:26:08 -- event/cpu_locks.sh@122 -- # locks_exist 689059 00:06:26.565 14:26:08 -- event/cpu_locks.sh@22 -- # lslocks -p 689059 00:06:26.565 14:26:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.132 lslocks: write error 00:06:27.132 14:26:09 -- event/cpu_locks.sh@124 -- # killprocess 689059 00:06:27.132 14:26:09 -- common/autotest_common.sh@926 -- # '[' -z 689059 ']' 00:06:27.132 14:26:09 -- common/autotest_common.sh@930 -- # kill -0 689059 00:06:27.132 14:26:09 -- common/autotest_common.sh@931 -- # uname 00:06:27.132 14:26:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.132 14:26:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 689059 00:06:27.132 14:26:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.132 14:26:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.132 14:26:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 689059' 00:06:27.132 killing process with pid 689059 00:06:27.132 14:26:09 -- common/autotest_common.sh@945 -- # kill 689059 00:06:27.132 14:26:09 -- common/autotest_common.sh@950 -- # wait 689059 00:06:27.392 00:06:27.392 real 0m2.527s 00:06:27.392 user 0m2.765s 00:06:27.392 sys 0m0.793s 00:06:27.392 14:26:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.392 14:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:27.392 ************************************ 00:06:27.392 END TEST locking_app_on_locked_coremask 00:06:27.392 ************************************ 00:06:27.392 14:26:09 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:27.392 14:26:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.392 14:26:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.392 14:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:27.392 ************************************ 00:06:27.392 START TEST locking_overlapped_coremask 00:06:27.392 ************************************ 00:06:27.392 14:26:09 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:27.392 14:26:09 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=689451 00:06:27.392 14:26:09 -- event/cpu_locks.sh@133 -- # waitforlisten 689451 /var/tmp/spdk.sock 00:06:27.392 14:26:09 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:27.392 14:26:09 -- common/autotest_common.sh@819 -- # '[' -z 689451 ']' 00:06:27.392 14:26:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.392 14:26:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.392 14:26:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.392 14:26:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.392 14:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:27.392 [2024-10-01 14:26:09.888858] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:27.392 [2024-10-01 14:26:09.888959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689451 ] 00:06:27.652 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.652 [2024-10-01 14:26:09.965055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.652 [2024-10-01 14:26:10.055015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.652 [2024-10-01 14:26:10.055163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.652 [2024-10-01 14:26:10.055179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.652 [2024-10-01 14:26:10.055180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.218 14:26:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.218 14:26:10 -- common/autotest_common.sh@852 -- # return 0 00:06:28.218 14:26:10 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=689485 00:06:28.218 14:26:10 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 689485 /var/tmp/spdk2.sock 00:06:28.218 14:26:10 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:28.218 14:26:10 -- common/autotest_common.sh@640 -- # local es=0 00:06:28.218 14:26:10 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 689485 /var/tmp/spdk2.sock 00:06:28.218 14:26:10 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:28.218 14:26:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.218 14:26:10 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:28.218 14:26:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.218 14:26:10 -- 
common/autotest_common.sh@643 -- # waitforlisten 689485 /var/tmp/spdk2.sock 00:06:28.218 14:26:10 -- common/autotest_common.sh@819 -- # '[' -z 689485 ']' 00:06:28.219 14:26:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.219 14:26:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.219 14:26:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.219 14:26:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.219 14:26:10 -- common/autotest_common.sh@10 -- # set +x 00:06:28.478 [2024-10-01 14:26:10.750610] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:28.478 [2024-10-01 14:26:10.750676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689485 ] 00:06:28.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.478 [2024-10-01 14:26:10.851717] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 689451 has claimed it. 00:06:28.478 [2024-10-01 14:26:10.851765] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.047 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (689485) - No such process 00:06:29.047 ERROR: process (pid: 689485) is no longer running 00:06:29.047 14:26:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.047 14:26:11 -- common/autotest_common.sh@852 -- # return 1 00:06:29.047 14:26:11 -- common/autotest_common.sh@643 -- # es=1 00:06:29.047 14:26:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:29.047 14:26:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:29.047 14:26:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:29.047 14:26:11 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.047 14:26:11 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.047 14:26:11 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.047 14:26:11 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.047 14:26:11 -- event/cpu_locks.sh@141 -- # killprocess 689451 00:06:29.047 14:26:11 -- common/autotest_common.sh@926 -- # '[' -z 689451 ']' 00:06:29.047 14:26:11 -- common/autotest_common.sh@930 -- # kill -0 689451 00:06:29.047 14:26:11 -- common/autotest_common.sh@931 -- # uname 00:06:29.047 14:26:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.047 14:26:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 689451 00:06:29.047 14:26:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.047 14:26:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.047 14:26:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 689451' 00:06:29.047 killing process with pid 689451 00:06:29.047 14:26:11 -- common/autotest_common.sh@945 -- # kill 689451 00:06:29.047 14:26:11 -- 
common/autotest_common.sh@950 -- # wait 689451 00:06:29.307 00:06:29.307 real 0m1.945s 00:06:29.307 user 0m5.442s 00:06:29.307 sys 0m0.487s 00:06:29.307 14:26:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.307 14:26:11 -- common/autotest_common.sh@10 -- # set +x 00:06:29.307 ************************************ 00:06:29.307 END TEST locking_overlapped_coremask 00:06:29.307 ************************************ 00:06:29.567 14:26:11 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:29.567 14:26:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.567 14:26:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.567 14:26:11 -- common/autotest_common.sh@10 -- # set +x 00:06:29.567 ************************************ 00:06:29.567 START TEST locking_overlapped_coremask_via_rpc 00:06:29.567 ************************************ 00:06:29.567 14:26:11 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:29.567 14:26:11 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=689678 00:06:29.567 14:26:11 -- event/cpu_locks.sh@149 -- # waitforlisten 689678 /var/tmp/spdk.sock 00:06:29.567 14:26:11 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:29.567 14:26:11 -- common/autotest_common.sh@819 -- # '[' -z 689678 ']' 00:06:29.567 14:26:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.567 14:26:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.567 14:26:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.567 14:26:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.567 14:26:11 -- common/autotest_common.sh@10 -- # set +x 00:06:29.567 [2024-10-01 14:26:11.880807] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:29.567 [2024-10-01 14:26:11.880897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689678 ] 00:06:29.567 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.567 [2024-10-01 14:26:11.960482] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
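locking_overlapped_coremask, which ends above, pairs the claim failure ("Cannot create lock on core 2, probably process 689451 has claimed it") with a filesystem-level assertion: check_remaining_locks globs the lock directory and requires that exactly the files for cores 0-2 exist, matching the surviving -m 0x7 target. The comparison from cpu_locks.sh@36-38, lifted into a standalone check:

  #!/usr/bin/env bash
  # Assert that exactly /var/tmp/spdk_cpu_lock_000..002 exist (cores 0-2 == -m 0x7).
  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }

  check_remaining_locks && echo "lock files match cores 0-2"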
00:06:29.567 [2024-10-01 14:26:11.960515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.567 [2024-10-01 14:26:12.051099] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.567 [2024-10-01 14:26:12.051264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.567 [2024-10-01 14:26:12.051349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.567 [2024-10-01 14:26:12.051351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.505 14:26:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.505 14:26:12 -- common/autotest_common.sh@852 -- # return 0 00:06:30.505 14:26:12 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=689859 00:06:30.505 14:26:12 -- event/cpu_locks.sh@153 -- # waitforlisten 689859 /var/tmp/spdk2.sock 00:06:30.505 14:26:12 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.505 14:26:12 -- common/autotest_common.sh@819 -- # '[' -z 689859 ']' 00:06:30.505 14:26:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.505 14:26:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.505 14:26:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.505 14:26:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.505 14:26:12 -- common/autotest_common.sh@10 -- # set +x 00:06:30.505 [2024-10-01 14:26:12.757490] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:30.505 [2024-10-01 14:26:12.757581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689859 ] 00:06:30.505 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.505 [2024-10-01 14:26:12.856678] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
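Both targets in this final test start with --disable-cpumask-locks so the conflict can be provoked later over RPC. The masks chosen make the collision precise: 0x7 (binary 00111) covers cores 0-2 and 0x1c (binary 11100) covers cores 2-4, so core 2 is the single contested core, and it is the one named in the failure that follows. The overlap as a one-liner:

  #!/usr/bin/env bash
  # Cores shared by the two cpumasks: 0x7 & 0x1c == 0x4, i.e. core 2.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))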
00:06:30.505 [2024-10-01 14:26:12.856705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.505 [2024-10-01 14:26:13.016877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.505 [2024-10-01 14:26:13.017053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.505 [2024-10-01 14:26:13.020768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.505 [2024-10-01 14:26:13.020770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.444 14:26:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.444 14:26:13 -- common/autotest_common.sh@852 -- # return 0 00:06:31.444 14:26:13 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.444 14:26:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.444 14:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.444 14:26:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.444 14:26:13 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.444 14:26:13 -- common/autotest_common.sh@640 -- # local es=0 00:06:31.444 14:26:13 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.444 14:26:13 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:31.444 14:26:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.444 14:26:13 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:31.444 14:26:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.444 14:26:13 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.444 14:26:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.444 14:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.444 [2024-10-01 14:26:13.639779] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 689678 has claimed it. 00:06:31.444 request: 00:06:31.444 { 00:06:31.444 "method": "framework_enable_cpumask_locks", 00:06:31.444 "req_id": 1 00:06:31.444 } 00:06:31.444 Got JSON-RPC error response 00:06:31.444 response: 00:06:31.444 { 00:06:31.444 "code": -32603, 00:06:31.444 "message": "Failed to claim CPU core: 2" 00:06:31.444 } 00:06:31.444 14:26:13 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:31.444 14:26:13 -- common/autotest_common.sh@643 -- # es=1 00:06:31.444 14:26:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:31.444 14:26:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:31.444 14:26:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:31.444 14:26:13 -- event/cpu_locks.sh@158 -- # waitforlisten 689678 /var/tmp/spdk.sock 00:06:31.444 14:26:13 -- common/autotest_common.sh@819 -- # '[' -z 689678 ']' 00:06:31.444 14:26:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.444 14:26:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.444 14:26:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
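The refusal then surfaces as the JSON-RPC exchange shown above: a framework_enable_cpumask_locks request to the second target is answered with error -32603, "Failed to claim CPU core: 2", because process 689678 already holds that core's lock. Reproduced with rpc.py, which exits non-zero on a JSON-RPC error, the expected-failure check is roughly:

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py

  # Expected to fail while the first target holds cores 0-2:
  # JSON-RPC error -32603, "Failed to claim CPU core: 2".
  if ! $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
      echo "lock claim refused, as the test requires"
  fi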
00:06:31.444 14:26:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.444 14:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.444 14:26:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.444 14:26:13 -- common/autotest_common.sh@852 -- # return 0 00:06:31.444 14:26:13 -- event/cpu_locks.sh@159 -- # waitforlisten 689859 /var/tmp/spdk2.sock 00:06:31.444 14:26:13 -- common/autotest_common.sh@819 -- # '[' -z 689859 ']' 00:06:31.444 14:26:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.444 14:26:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.444 14:26:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.444 14:26:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.444 14:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.703 14:26:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.703 14:26:14 -- common/autotest_common.sh@852 -- # return 0 00:06:31.703 14:26:14 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:31.703 14:26:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.703 14:26:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.703 14:26:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.703 00:06:31.703 real 0m2.181s 00:06:31.703 user 0m0.897s 00:06:31.703 sys 0m0.215s 00:06:31.703 14:26:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.703 14:26:14 -- common/autotest_common.sh@10 -- # set +x 00:06:31.703 ************************************ 00:06:31.703 END TEST locking_overlapped_coremask_via_rpc 00:06:31.703 ************************************ 00:06:31.703 14:26:14 -- event/cpu_locks.sh@174 -- # cleanup 00:06:31.703 14:26:14 -- event/cpu_locks.sh@15 -- # [[ -z 689678 ]] 00:06:31.703 14:26:14 -- event/cpu_locks.sh@15 -- # killprocess 689678 00:06:31.703 14:26:14 -- common/autotest_common.sh@926 -- # '[' -z 689678 ']' 00:06:31.703 14:26:14 -- common/autotest_common.sh@930 -- # kill -0 689678 00:06:31.703 14:26:14 -- common/autotest_common.sh@931 -- # uname 00:06:31.703 14:26:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.703 14:26:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 689678 00:06:31.703 14:26:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.703 14:26:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.703 14:26:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 689678' 00:06:31.703 killing process with pid 689678 00:06:31.703 14:26:14 -- common/autotest_common.sh@945 -- # kill 689678 00:06:31.703 14:26:14 -- common/autotest_common.sh@950 -- # wait 689678 00:06:31.962 14:26:14 -- event/cpu_locks.sh@16 -- # [[ -z 689859 ]] 00:06:31.962 14:26:14 -- event/cpu_locks.sh@16 -- # killprocess 689859 00:06:31.962 14:26:14 -- common/autotest_common.sh@926 -- # '[' -z 689859 ']' 00:06:31.962 14:26:14 -- common/autotest_common.sh@930 -- # kill -0 689859 00:06:31.962 14:26:14 -- common/autotest_common.sh@931 -- # uname 00:06:31.962 
14:26:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.962 14:26:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 689859 00:06:32.222 14:26:14 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:32.222 14:26:14 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:32.222 14:26:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 689859' 00:06:32.222 killing process with pid 689859 00:06:32.222 14:26:14 -- common/autotest_common.sh@945 -- # kill 689859 00:06:32.222 14:26:14 -- common/autotest_common.sh@950 -- # wait 689859 00:06:32.481 14:26:14 -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.481 14:26:14 -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.481 14:26:14 -- event/cpu_locks.sh@15 -- # [[ -z 689678 ]] 00:06:32.481 14:26:14 -- event/cpu_locks.sh@15 -- # killprocess 689678 00:06:32.481 14:26:14 -- common/autotest_common.sh@926 -- # '[' -z 689678 ']' 00:06:32.481 14:26:14 -- common/autotest_common.sh@930 -- # kill -0 689678 00:06:32.481 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (689678) - No such process 00:06:32.481 14:26:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 689678 is not found' 00:06:32.481 Process with pid 689678 is not found 00:06:32.481 14:26:14 -- event/cpu_locks.sh@16 -- # [[ -z 689859 ]] 00:06:32.481 14:26:14 -- event/cpu_locks.sh@16 -- # killprocess 689859 00:06:32.481 14:26:14 -- common/autotest_common.sh@926 -- # '[' -z 689859 ']' 00:06:32.481 14:26:14 -- common/autotest_common.sh@930 -- # kill -0 689859 00:06:32.481 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (689859) - No such process 00:06:32.481 14:26:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 689859 is not found' 00:06:32.481 Process with pid 689859 is not found 00:06:32.481 14:26:14 -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.481 00:06:32.481 real 0m18.784s 00:06:32.481 user 0m31.598s 00:06:32.481 sys 0m6.108s 00:06:32.481 14:26:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.481 14:26:14 -- common/autotest_common.sh@10 -- # set +x 00:06:32.481 ************************************ 00:06:32.481 END TEST cpu_locks 00:06:32.481 ************************************ 00:06:32.481 00:06:32.481 real 0m44.794s 00:06:32.481 user 1m24.241s 00:06:32.481 sys 0m10.331s 00:06:32.481 14:26:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.482 14:26:14 -- common/autotest_common.sh@10 -- # set +x 00:06:32.482 ************************************ 00:06:32.482 END TEST event 00:06:32.482 ************************************ 00:06:32.482 14:26:14 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:32.482 14:26:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.482 14:26:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.482 14:26:14 -- common/autotest_common.sh@10 -- # set +x 00:06:32.482 ************************************ 00:06:32.482 START TEST thread 00:06:32.482 ************************************ 00:06:32.482 14:26:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:32.741 * Looking for test storage... 
00:06:32.741 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:32.741 14:26:15 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.741 14:26:15 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:32.741 14:26:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.741 14:26:15 -- common/autotest_common.sh@10 -- # set +x 00:06:32.741 ************************************ 00:06:32.741 START TEST thread_poller_perf 00:06:32.741 ************************************ 00:06:32.741 14:26:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.741 [2024-10-01 14:26:15.094070] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:32.741 [2024-10-01 14:26:15.094140] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690301 ] 00:06:32.741 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.741 [2024-10-01 14:26:15.164760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.741 [2024-10-01 14:26:15.242711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.741 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:34.121 ====================================== 00:06:34.121 busy:2306156428 (cyc) 00:06:34.121 total_run_count: 778000 00:06:34.121 tsc_hz: 2300000000 (cyc) 00:06:34.121 ====================================== 00:06:34.121 poller_cost: 2964 (cyc), 1288 (nsec) 00:06:34.121 00:06:34.121 real 0m1.237s 00:06:34.121 user 0m1.145s 00:06:34.121 sys 0m0.087s 00:06:34.121 14:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.121 14:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:34.121 ************************************ 00:06:34.121 END TEST thread_poller_perf 00:06:34.121 ************************************ 00:06:34.121 14:26:16 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.121 14:26:16 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:34.121 14:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.121 14:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:34.121 ************************************ 00:06:34.121 START TEST thread_poller_perf 00:06:34.121 ************************************ 00:06:34.121 14:26:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.121 [2024-10-01 14:26:16.382846] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:34.121 [2024-10-01 14:26:16.382952] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690498 ] 00:06:34.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.121 [2024-10-01 14:26:16.460141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.121 [2024-10-01 14:26:16.537988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.121 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:35.499 ====================================== 00:06:35.499 busy:2301748362 (cyc) 00:06:35.499 total_run_count: 13296000 00:06:35.499 tsc_hz: 2300000000 (cyc) 00:06:35.499 ====================================== 00:06:35.499 poller_cost: 173 (cyc), 75 (nsec) 00:06:35.499 00:06:35.499 real 0m1.248s 00:06:35.499 user 0m1.152s 00:06:35.499 sys 0m0.092s 00:06:35.499 14:26:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.499 14:26:17 -- common/autotest_common.sh@10 -- # set +x 00:06:35.499 ************************************ 00:06:35.499 END TEST thread_poller_perf 00:06:35.499 ************************************ 00:06:35.499 14:26:17 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:35.499 14:26:17 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:35.499 14:26:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.499 14:26:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.499 14:26:17 -- common/autotest_common.sh@10 -- # set +x 00:06:35.499 ************************************ 00:06:35.499 START TEST thread_spdk_lock 00:06:35.499 ************************************ 00:06:35.499 14:26:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:35.499 [2024-10-01 14:26:17.666742] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:35.499 [2024-10-01 14:26:17.666830] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690662 ] 00:06:35.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.499 [2024-10-01 14:26:17.742254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.500 [2024-10-01 14:26:17.820165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.500 [2024-10-01 14:26:17.820167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.068 [2024-10-01 14:26:18.305332] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:36.068 [2024-10-01 14:26:18.305369] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:36.068 [2024-10-01 14:26:18.305395] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x1483c80 00:06:36.068 [2024-10-01 14:26:18.306257] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:36.068 [2024-10-01 14:26:18.306361] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:36.068 [2024-10-01 14:26:18.306380] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:36.068 Starting test contend 00:06:36.068 Worker Delay Wait us Hold us Total us 00:06:36.068 0 3 167042 184296 351338 00:06:36.068 1 5 82803 284401 367204 00:06:36.068 PASS test contend 00:06:36.068 Starting test hold_by_poller 00:06:36.068 PASS test hold_by_poller 00:06:36.068 Starting test hold_by_message 00:06:36.068 PASS test hold_by_message 00:06:36.068 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:36.068 100014 assertions passed 00:06:36.068 0 assertions failed 00:06:36.068 00:06:36.068 real 0m0.728s 00:06:36.068 user 0m1.118s 00:06:36.068 sys 0m0.093s 00:06:36.068 14:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.068 14:26:18 -- common/autotest_common.sh@10 -- # set +x 00:06:36.068 ************************************ 00:06:36.068 END TEST thread_spdk_lock 00:06:36.068 ************************************ 00:06:36.068 00:06:36.068 real 0m3.437s 00:06:36.068 user 0m3.498s 00:06:36.068 sys 0m0.445s 00:06:36.068 14:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.068 14:26:18 -- common/autotest_common.sh@10 -- # set +x 00:06:36.068 ************************************ 00:06:36.068 END TEST thread 00:06:36.068 ************************************ 00:06:36.068 14:26:18 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:36.068 14:26:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:06:36.068 14:26:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.068 14:26:18 -- common/autotest_common.sh@10 -- # set +x 00:06:36.068 ************************************ 00:06:36.068 START TEST accel 00:06:36.068 ************************************ 00:06:36.068 14:26:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:36.068 * Looking for test storage... 00:06:36.068 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:36.068 14:26:18 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:36.068 14:26:18 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:36.068 14:26:18 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.068 14:26:18 -- accel/accel.sh@59 -- # spdk_tgt_pid=690766 00:06:36.068 14:26:18 -- accel/accel.sh@60 -- # waitforlisten 690766 00:06:36.068 14:26:18 -- common/autotest_common.sh@819 -- # '[' -z 690766 ']' 00:06:36.068 14:26:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.068 14:26:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.068 14:26:18 -- accel/accel.sh@58 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:36.068 14:26:18 -- accel/accel.sh@58 -- # build_accel_config 00:06:36.068 14:26:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.068 14:26:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.068 14:26:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.068 14:26:18 -- common/autotest_common.sh@10 -- # set +x 00:06:36.068 14:26:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.068 14:26:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.068 14:26:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.068 14:26:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.068 14:26:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.068 14:26:18 -- accel/accel.sh@42 -- # jq -r . 00:06:36.068 [2024-10-01 14:26:18.584379] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:36.068 [2024-10-01 14:26:18.584479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690766 ] 00:06:36.326 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.326 [2024-10-01 14:26:18.661234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.326 [2024-10-01 14:26:18.740986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.326 [2024-10-01 14:26:18.741117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.261 14:26:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.261 14:26:19 -- common/autotest_common.sh@852 -- # return 0 00:06:37.261 14:26:19 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:37.261 14:26:19 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:37.261 14:26:19 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:37.261 14:26:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:37.261 14:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.261 14:26:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.261 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.261 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.261 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.261 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.261 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.261 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.261 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.261 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.261 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.261 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.261 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.262 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.262 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.262 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.262 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.262 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.262 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.262 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.262 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.262 
14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.262 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.262 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.262 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.262 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.262 14:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.262 14:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:37.262 14:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:37.262 14:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:37.262 14:26:19 -- accel/accel.sh@67 -- # killprocess 690766 00:06:37.262 14:26:19 -- common/autotest_common.sh@926 -- # '[' -z 690766 ']' 00:06:37.262 14:26:19 -- common/autotest_common.sh@930 -- # kill -0 690766 00:06:37.262 14:26:19 -- common/autotest_common.sh@931 -- # uname 00:06:37.262 14:26:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.262 14:26:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 690766 00:06:37.262 14:26:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.262 14:26:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.262 14:26:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 690766' 00:06:37.262 killing process with pid 690766 00:06:37.262 14:26:19 -- common/autotest_common.sh@945 -- # kill 690766 00:06:37.262 14:26:19 -- common/autotest_common.sh@950 -- # wait 690766 00:06:37.519 14:26:19 -- accel/accel.sh@68 -- # trap - ERR 00:06:37.519 14:26:19 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:37.519 14:26:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:37.519 14:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.519 14:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.519 14:26:19 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:37.519 14:26:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:37.519 14:26:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.519 14:26:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.519 14:26:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.519 14:26:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.519 14:26:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.519 14:26:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.519 14:26:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.519 14:26:19 -- accel/accel.sh@42 -- # jq -r . 
00:06:37.519 14:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.519 14:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.519 14:26:19 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:37.519 14:26:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:37.519 14:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.519 14:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.519 ************************************ 00:06:37.519 START TEST accel_missing_filename 00:06:37.519 ************************************ 00:06:37.519 14:26:19 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:37.519 14:26:19 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.519 14:26:19 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:37.519 14:26:19 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:37.520 14:26:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.520 14:26:19 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:37.520 14:26:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.520 14:26:19 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:37.520 14:26:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:37.520 14:26:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.520 14:26:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.520 14:26:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.520 14:26:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.520 14:26:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.520 14:26:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.520 14:26:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.520 14:26:19 -- accel/accel.sh@42 -- # jq -r . 00:06:37.520 [2024-10-01 14:26:19.961254] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:37.520 [2024-10-01 14:26:19.961350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690982 ] 00:06:37.520 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.778 [2024-10-01 14:26:20.045002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.778 [2024-10-01 14:26:20.135584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.778 [2024-10-01 14:26:20.175137] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.778 [2024-10-01 14:26:20.234235] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:38.038 A filename is required. 
00:06:38.038 14:26:20 -- common/autotest_common.sh@643 -- # es=234 00:06:38.038 14:26:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.038 14:26:20 -- common/autotest_common.sh@652 -- # es=106 00:06:38.038 14:26:20 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:38.038 14:26:20 -- common/autotest_common.sh@660 -- # es=1 00:06:38.038 14:26:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.038 00:06:38.038 real 0m0.373s 00:06:38.038 user 0m0.263s 00:06:38.038 sys 0m0.149s 00:06:38.038 14:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.038 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.038 ************************************ 00:06:38.038 END TEST accel_missing_filename 00:06:38.038 ************************************ 00:06:38.038 14:26:20 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:38.038 14:26:20 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:38.038 14:26:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.038 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.038 ************************************ 00:06:38.038 START TEST accel_compress_verify 00:06:38.038 ************************************ 00:06:38.038 14:26:20 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:38.038 14:26:20 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.038 14:26:20 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:38.038 14:26:20 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:38.038 14:26:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.038 14:26:20 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:38.038 14:26:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.038 14:26:20 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:38.038 14:26:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:38.038 14:26:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.038 14:26:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.038 14:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.038 14:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.038 14:26:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.038 14:26:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.038 14:26:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.038 14:26:20 -- accel/accel.sh@42 -- # jq -r . 00:06:38.038 [2024-10-01 14:26:20.381260] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:38.038 [2024-10-01 14:26:20.381365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691167 ] 00:06:38.038 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.038 [2024-10-01 14:26:20.460341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.038 [2024-10-01 14:26:20.547956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.298 [2024-10-01 14:26:20.594715] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.298 [2024-10-01 14:26:20.663844] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:38.298 00:06:38.298 Compression does not support the verify option, aborting. 00:06:38.298 14:26:20 -- common/autotest_common.sh@643 -- # es=161 00:06:38.298 14:26:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.298 14:26:20 -- common/autotest_common.sh@652 -- # es=33 00:06:38.298 14:26:20 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:38.298 14:26:20 -- common/autotest_common.sh@660 -- # es=1 00:06:38.298 14:26:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.298 00:06:38.298 real 0m0.382s 00:06:38.298 user 0m0.273s 00:06:38.298 sys 0m0.144s 00:06:38.298 14:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.298 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.298 ************************************ 00:06:38.298 END TEST accel_compress_verify 00:06:38.298 ************************************ 00:06:38.298 14:26:20 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:38.298 14:26:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:38.298 14:26:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.298 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.298 ************************************ 00:06:38.298 START TEST accel_wrong_workload 00:06:38.298 ************************************ 00:06:38.298 14:26:20 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:38.298 14:26:20 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.298 14:26:20 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:38.298 14:26:20 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:38.298 14:26:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.298 14:26:20 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:38.298 14:26:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.298 14:26:20 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:38.298 14:26:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:38.298 14:26:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.298 14:26:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.298 14:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.298 14:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.298 14:26:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.298 14:26:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.298 14:26:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.298 14:26:20 -- accel/accel.sh@42 -- # jq -r . 
00:06:38.298 Unsupported workload type: foobar 00:06:38.298 [2024-10-01 14:26:20.810919] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:38.298 accel_perf options: 00:06:38.298 [-h help message] 00:06:38.298 [-q queue depth per core] 00:06:38.298 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.298 [-T number of threads per core 00:06:38.298 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.298 [-t time in seconds] 00:06:38.298 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.298 [ dif_verify, , dif_generate, dif_generate_copy 00:06:38.298 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.298 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.298 [-S for crc32c workload, use this seed value (default 0) 00:06:38.298 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.298 [-f for fill workload, use this BYTE value (default 255) 00:06:38.298 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.298 [-y verify result if this switch is on] 00:06:38.298 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.298 Can be used to spread operations across a wider range of memory. 00:06:38.298 14:26:20 -- common/autotest_common.sh@643 -- # es=1 00:06:38.298 14:26:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.298 14:26:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.298 14:26:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.298 00:06:38.298 real 0m0.029s 00:06:38.298 user 0m0.012s 00:06:38.298 sys 0m0.017s 00:06:38.298 14:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.298 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.298 ************************************ 00:06:38.298 END TEST accel_wrong_workload 00:06:38.298 ************************************ 00:06:38.558 Error: writing output failed: Broken pipe 00:06:38.558 14:26:20 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.558 14:26:20 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:38.558 14:26:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.558 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.558 ************************************ 00:06:38.558 START TEST accel_negative_buffers 00:06:38.558 ************************************ 00:06:38.558 14:26:20 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.558 14:26:20 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.558 14:26:20 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:38.558 14:26:20 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:38.558 14:26:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.558 14:26:20 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:38.558 14:26:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.558 14:26:20 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:38.558 14:26:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:38.558 14:26:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.558 14:26:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.558 14:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.558 14:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.558 14:26:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.558 14:26:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.558 14:26:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.558 14:26:20 -- accel/accel.sh@42 -- # jq -r . 00:06:38.558 -x option must be non-negative. 00:06:38.558 [2024-10-01 14:26:20.884738] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:38.558 accel_perf options: 00:06:38.558 [-h help message] 00:06:38.558 [-q queue depth per core] 00:06:38.558 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.558 [-T number of threads per core 00:06:38.558 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.558 [-t time in seconds] 00:06:38.559 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.559 [ dif_verify, , dif_generate, dif_generate_copy 00:06:38.559 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.559 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.559 [-S for crc32c workload, use this seed value (default 0) 00:06:38.559 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.559 [-f for fill workload, use this BYTE value (default 255) 00:06:38.559 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.559 [-y verify result if this switch is on] 00:06:38.559 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.559 Can be used to spread operations across a wider range of memory. 
00:06:38.559 14:26:20 -- common/autotest_common.sh@643 -- # es=1 00:06:38.559 14:26:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.559 14:26:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.559 14:26:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.559 00:06:38.559 real 0m0.029s 00:06:38.559 user 0m0.009s 00:06:38.559 sys 0m0.020s 00:06:38.559 14:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.559 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.559 ************************************ 00:06:38.559 END TEST accel_negative_buffers 00:06:38.559 ************************************ 00:06:38.559 Error: writing output failed: Broken pipe 00:06:38.559 14:26:20 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:38.559 14:26:20 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:38.559 14:26:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.559 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.559 ************************************ 00:06:38.559 START TEST accel_crc32c 00:06:38.559 ************************************ 00:06:38.559 14:26:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:38.559 14:26:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.559 14:26:20 -- accel/accel.sh@17 -- # local accel_module 00:06:38.559 14:26:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:38.559 14:26:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:38.559 14:26:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.559 14:26:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.559 14:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.559 14:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.559 14:26:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.559 14:26:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.559 14:26:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.559 14:26:20 -- accel/accel.sh@42 -- # jq -r . 00:06:38.559 [2024-10-01 14:26:20.962908] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:38.559 [2024-10-01 14:26:20.962999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691231 ] 00:06:38.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.559 [2024-10-01 14:26:21.041706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.818 [2024-10-01 14:26:21.129385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.198 14:26:22 -- accel/accel.sh@18 -- # out=' 00:06:40.198 SPDK Configuration: 00:06:40.198 Core mask: 0x1 00:06:40.198 00:06:40.198 Accel Perf Configuration: 00:06:40.198 Workload Type: crc32c 00:06:40.198 CRC-32C seed: 32 00:06:40.198 Transfer size: 4096 bytes 00:06:40.198 Vector count 1 00:06:40.198 Module: software 00:06:40.198 Queue depth: 32 00:06:40.198 Allocate depth: 32 00:06:40.198 # threads/core: 1 00:06:40.198 Run time: 1 seconds 00:06:40.198 Verify: Yes 00:06:40.198 00:06:40.198 Running for 1 seconds... 
00:06:40.198 00:06:40.198 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.198 ------------------------------------------------------------------------------------ 00:06:40.198 0,0 834400/s 3259 MiB/s 0 0 00:06:40.198 ==================================================================================== 00:06:40.198 Total 834400/s 3259 MiB/s 0 0' 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:40.198 14:26:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:40.198 14:26:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.198 14:26:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.198 14:26:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.198 14:26:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.198 14:26:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.198 14:26:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.198 14:26:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.198 14:26:22 -- accel/accel.sh@42 -- # jq -r . 00:06:40.198 [2024-10-01 14:26:22.344851] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:40.198 [2024-10-01 14:26:22.344948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691409 ] 00:06:40.198 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.198 [2024-10-01 14:26:22.421048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.198 [2024-10-01 14:26:22.501265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val= 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val= 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val=0x1 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val= 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val= 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val=crc32c 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val=32 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 
14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val= 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val=software 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val=32 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val=32 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val=1 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val=Yes 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val= 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:40.198 14:26:22 -- accel/accel.sh@21 -- # val= 00:06:40.198 14:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:40.198 14:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:41.577 14:26:23 -- accel/accel.sh@21 -- # val= 00:06:41.577 14:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:41.577 14:26:23 -- accel/accel.sh@21 -- # val= 00:06:41.577 14:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:41.577 14:26:23 -- accel/accel.sh@21 -- # val= 00:06:41.577 14:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:41.577 14:26:23 -- accel/accel.sh@21 -- # val= 00:06:41.577 14:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:41.577 14:26:23 -- accel/accel.sh@21 -- # val= 00:06:41.577 14:26:23 -- accel/accel.sh@22 -- # case "$var" in 
00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:41.577 14:26:23 -- accel/accel.sh@21 -- # val= 00:06:41.577 14:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:41.577 14:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:41.577 14:26:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.577 14:26:23 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:41.577 14:26:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.577 00:06:41.578 real 0m2.759s 00:06:41.578 user 0m2.472s 00:06:41.578 sys 0m0.292s 00:06:41.578 14:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.578 14:26:23 -- common/autotest_common.sh@10 -- # set +x 00:06:41.578 ************************************ 00:06:41.578 END TEST accel_crc32c 00:06:41.578 ************************************ 00:06:41.578 14:26:23 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:41.578 14:26:23 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:41.578 14:26:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.578 14:26:23 -- common/autotest_common.sh@10 -- # set +x 00:06:41.578 ************************************ 00:06:41.578 START TEST accel_crc32c_C2 00:06:41.578 ************************************ 00:06:41.578 14:26:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:41.578 14:26:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.578 14:26:23 -- accel/accel.sh@17 -- # local accel_module 00:06:41.578 14:26:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:41.578 14:26:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:41.578 14:26:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.578 14:26:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.578 14:26:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.578 14:26:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.578 14:26:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.578 14:26:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.578 14:26:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.578 14:26:23 -- accel/accel.sh@42 -- # jq -r . 00:06:41.578 [2024-10-01 14:26:23.769260] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:41.578 [2024-10-01 14:26:23.769346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691606 ] 00:06:41.578 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.578 [2024-10-01 14:26:23.844248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.578 [2024-10-01 14:26:23.922631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.958 14:26:25 -- accel/accel.sh@18 -- # out=' 00:06:42.958 SPDK Configuration: 00:06:42.958 Core mask: 0x1 00:06:42.958 00:06:42.958 Accel Perf Configuration: 00:06:42.958 Workload Type: crc32c 00:06:42.958 CRC-32C seed: 0 00:06:42.958 Transfer size: 4096 bytes 00:06:42.958 Vector count 2 00:06:42.958 Module: software 00:06:42.958 Queue depth: 32 00:06:42.958 Allocate depth: 32 00:06:42.958 # threads/core: 1 00:06:42.958 Run time: 1 seconds 00:06:42.958 Verify: Yes 00:06:42.958 00:06:42.958 Running for 1 seconds... 00:06:42.958 00:06:42.958 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.958 ------------------------------------------------------------------------------------ 00:06:42.958 0,0 604032/s 4719 MiB/s 0 0 00:06:42.958 ==================================================================================== 00:06:42.958 Total 604032/s 2359 MiB/s 0 0' 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:42.958 14:26:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:42.958 14:26:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.958 14:26:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.958 14:26:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.958 14:26:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.958 14:26:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.958 14:26:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.958 14:26:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.958 14:26:25 -- accel/accel.sh@42 -- # jq -r . 00:06:42.958 [2024-10-01 14:26:25.121392] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:42.958 [2024-10-01 14:26:25.121480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691795 ] 00:06:42.958 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.958 [2024-10-01 14:26:25.195587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.958 [2024-10-01 14:26:25.274550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val= 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val= 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val=0x1 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val= 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val= 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val=crc32c 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val=0 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val= 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val=software 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val=32 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val=32 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- 
accel/accel.sh@21 -- # val=1 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val=Yes 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val= 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:42.958 14:26:25 -- accel/accel.sh@21 -- # val= 00:06:42.958 14:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:42.958 14:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:44.340 14:26:26 -- accel/accel.sh@21 -- # val= 00:06:44.340 14:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:44.340 14:26:26 -- accel/accel.sh@21 -- # val= 00:06:44.340 14:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:44.340 14:26:26 -- accel/accel.sh@21 -- # val= 00:06:44.340 14:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:44.340 14:26:26 -- accel/accel.sh@21 -- # val= 00:06:44.340 14:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:44.340 14:26:26 -- accel/accel.sh@21 -- # val= 00:06:44.340 14:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:44.340 14:26:26 -- accel/accel.sh@21 -- # val= 00:06:44.340 14:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:44.340 14:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:44.340 14:26:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.340 14:26:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:44.340 14:26:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.340 00:06:44.340 real 0m2.716s 00:06:44.340 user 0m2.445s 00:06:44.340 sys 0m0.277s 00:06:44.340 14:26:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.340 14:26:26 -- common/autotest_common.sh@10 -- # set +x 00:06:44.340 ************************************ 00:06:44.340 END TEST accel_crc32c_C2 00:06:44.340 ************************************ 00:06:44.340 14:26:26 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:44.340 14:26:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:44.340 14:26:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.340 14:26:26 -- common/autotest_common.sh@10 -- # set +x 00:06:44.340 ************************************ 00:06:44.340 START TEST accel_copy 
00:06:44.340 ************************************ 00:06:44.340 14:26:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:44.340 14:26:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.341 14:26:26 -- accel/accel.sh@17 -- # local accel_module 00:06:44.341 14:26:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:44.341 14:26:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.341 14:26:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:44.341 14:26:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.341 14:26:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.341 14:26:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.341 14:26:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.341 14:26:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.341 14:26:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.341 14:26:26 -- accel/accel.sh@42 -- # jq -r . 00:06:44.341 [2024-10-01 14:26:26.533940] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:44.341 [2024-10-01 14:26:26.534027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691989 ] 00:06:44.341 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.341 [2024-10-01 14:26:26.609581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.341 [2024-10-01 14:26:26.690164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.722 14:26:27 -- accel/accel.sh@18 -- # out=' 00:06:45.722 SPDK Configuration: 00:06:45.722 Core mask: 0x1 00:06:45.722 00:06:45.722 Accel Perf Configuration: 00:06:45.722 Workload Type: copy 00:06:45.722 Transfer size: 4096 bytes 00:06:45.722 Vector count 1 00:06:45.722 Module: software 00:06:45.722 Queue depth: 32 00:06:45.722 Allocate depth: 32 00:06:45.722 # threads/core: 1 00:06:45.722 Run time: 1 seconds 00:06:45.722 Verify: Yes 00:06:45.722 00:06:45.722 Running for 1 seconds... 00:06:45.722 00:06:45.722 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.722 ------------------------------------------------------------------------------------ 00:06:45.722 0,0 535712/s 2092 MiB/s 0 0 00:06:45.722 ==================================================================================== 00:06:45.722 Total 535712/s 2092 MiB/s 0 0' 00:06:45.722 14:26:27 -- accel/accel.sh@20 -- # IFS=: 00:06:45.722 14:26:27 -- accel/accel.sh@20 -- # read -r var val 00:06:45.722 14:26:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:45.722 14:26:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:45.722 14:26:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.722 14:26:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.722 14:26:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.722 14:26:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.722 14:26:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.722 14:26:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.722 14:26:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.722 14:26:27 -- accel/accel.sh@42 -- # jq -r . 00:06:45.722 [2024-10-01 14:26:27.905284] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:45.722 [2024-10-01 14:26:27.905370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692169 ] 00:06:45.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.722 [2024-10-01 14:26:27.981068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.722 [2024-10-01 14:26:28.061581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.722 14:26:28 -- accel/accel.sh@21 -- # val= 00:06:45.722 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.722 14:26:28 -- accel/accel.sh@21 -- # val= 00:06:45.722 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.722 14:26:28 -- accel/accel.sh@21 -- # val=0x1 00:06:45.722 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.722 14:26:28 -- accel/accel.sh@21 -- # val= 00:06:45.722 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.722 14:26:28 -- accel/accel.sh@21 -- # val= 00:06:45.722 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.722 14:26:28 -- accel/accel.sh@21 -- # val=copy 00:06:45.722 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.722 14:26:28 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.722 14:26:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.722 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.722 14:26:28 -- accel/accel.sh@21 -- # val= 00:06:45.722 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.722 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.722 14:26:28 -- accel/accel.sh@21 -- # val=software 00:06:45.723 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.723 14:26:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.723 14:26:28 -- accel/accel.sh@21 -- # val=32 00:06:45.723 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.723 14:26:28 -- accel/accel.sh@21 -- # val=32 00:06:45.723 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.723 14:26:28 -- accel/accel.sh@21 -- # val=1 00:06:45.723 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.723 14:26:28 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:45.723 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.723 14:26:28 -- accel/accel.sh@21 -- # val=Yes 00:06:45.723 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.723 14:26:28 -- accel/accel.sh@21 -- # val= 00:06:45.723 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:45.723 14:26:28 -- accel/accel.sh@21 -- # val= 00:06:45.723 14:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:45.723 14:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:47.106 14:26:29 -- accel/accel.sh@21 -- # val= 00:06:47.106 14:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:47.106 14:26:29 -- accel/accel.sh@21 -- # val= 00:06:47.106 14:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:47.106 14:26:29 -- accel/accel.sh@21 -- # val= 00:06:47.106 14:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:47.106 14:26:29 -- accel/accel.sh@21 -- # val= 00:06:47.106 14:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:47.106 14:26:29 -- accel/accel.sh@21 -- # val= 00:06:47.106 14:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:47.106 14:26:29 -- accel/accel.sh@21 -- # val= 00:06:47.106 14:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:47.106 14:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:47.106 14:26:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.106 14:26:29 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:47.106 14:26:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.106 00:06:47.106 real 0m2.750s 00:06:47.106 user 0m2.465s 00:06:47.106 sys 0m0.291s 00:06:47.106 14:26:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.106 14:26:29 -- common/autotest_common.sh@10 -- # set +x 00:06:47.106 ************************************ 00:06:47.106 END TEST accel_copy 00:06:47.106 ************************************ 00:06:47.106 14:26:29 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.106 14:26:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:47.106 14:26:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.106 14:26:29 -- common/autotest_common.sh@10 -- # set +x 00:06:47.106 ************************************ 00:06:47.106 START TEST accel_fill 00:06:47.106 ************************************ 00:06:47.106 14:26:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.106 14:26:29 -- accel/accel.sh@16 -- # local accel_opc 
00:06:47.106 14:26:29 -- accel/accel.sh@17 -- # local accel_module 00:06:47.106 14:26:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.106 14:26:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.106 14:26:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.106 14:26:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.106 14:26:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.106 14:26:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.106 14:26:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.106 14:26:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.106 14:26:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.106 14:26:29 -- accel/accel.sh@42 -- # jq -r . 00:06:47.106 [2024-10-01 14:26:29.330477] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:47.106 [2024-10-01 14:26:29.330573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692371 ] 00:06:47.106 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.106 [2024-10-01 14:26:29.405259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.106 [2024-10-01 14:26:29.486357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.543 14:26:30 -- accel/accel.sh@18 -- # out=' 00:06:48.543 SPDK Configuration: 00:06:48.543 Core mask: 0x1 00:06:48.543 00:06:48.543 Accel Perf Configuration: 00:06:48.543 Workload Type: fill 00:06:48.543 Fill pattern: 0x80 00:06:48.543 Transfer size: 4096 bytes 00:06:48.543 Vector count 1 00:06:48.543 Module: software 00:06:48.543 Queue depth: 64 00:06:48.543 Allocate depth: 64 00:06:48.543 # threads/core: 1 00:06:48.543 Run time: 1 seconds 00:06:48.543 Verify: Yes 00:06:48.543 00:06:48.543 Running for 1 seconds... 00:06:48.543 00:06:48.543 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.543 ------------------------------------------------------------------------------------ 00:06:48.543 0,0 907840/s 3546 MiB/s 0 0 00:06:48.543 ==================================================================================== 00:06:48.543 Total 907840/s 3546 MiB/s 0 0' 00:06:48.543 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 14:26:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:48.543 14:26:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:48.543 14:26:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.543 14:26:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.543 14:26:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.543 14:26:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.543 14:26:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.543 14:26:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.543 14:26:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.543 14:26:30 -- accel/accel.sh@42 -- # jq -r . 00:06:48.543 [2024-10-01 14:26:30.702179] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:48.543 [2024-10-01 14:26:30.702269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692551 ] 00:06:48.543 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.543 [2024-10-01 14:26:30.776086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.543 [2024-10-01 14:26:30.862065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.543 14:26:30 -- accel/accel.sh@21 -- # val= 00:06:48.543 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 14:26:30 -- accel/accel.sh@21 -- # val= 00:06:48.543 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 14:26:30 -- accel/accel.sh@21 -- # val=0x1 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val= 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val= 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val=fill 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val=0x80 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val= 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val=software 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val=64 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val=64 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- 
accel/accel.sh@21 -- # val=1 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val=Yes 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val= 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:48.544 14:26:30 -- accel/accel.sh@21 -- # val= 00:06:48.544 14:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:48.544 14:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:49.924 14:26:32 -- accel/accel.sh@21 -- # val= 00:06:49.924 14:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:49.924 14:26:32 -- accel/accel.sh@21 -- # val= 00:06:49.924 14:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:49.924 14:26:32 -- accel/accel.sh@21 -- # val= 00:06:49.924 14:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:49.924 14:26:32 -- accel/accel.sh@21 -- # val= 00:06:49.924 14:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:49.924 14:26:32 -- accel/accel.sh@21 -- # val= 00:06:49.924 14:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:49.924 14:26:32 -- accel/accel.sh@21 -- # val= 00:06:49.924 14:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:49.924 14:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:49.924 14:26:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.924 14:26:32 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:49.924 14:26:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.924 00:06:49.924 real 0m2.753s 00:06:49.924 user 0m2.456s 00:06:49.924 sys 0m0.302s 00:06:49.924 14:26:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.924 14:26:32 -- common/autotest_common.sh@10 -- # set +x 00:06:49.924 ************************************ 00:06:49.924 END TEST accel_fill 00:06:49.924 ************************************ 00:06:49.924 14:26:32 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:49.924 14:26:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:49.924 14:26:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.924 14:26:32 -- common/autotest_common.sh@10 -- # set +x 00:06:49.924 ************************************ 00:06:49.924 START TEST 
accel_copy_crc32c 00:06:49.924 ************************************ 00:06:49.924 14:26:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:49.924 14:26:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.924 14:26:32 -- accel/accel.sh@17 -- # local accel_module 00:06:49.924 14:26:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:49.924 14:26:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:49.924 14:26:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.924 14:26:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.924 14:26:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.924 14:26:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.924 14:26:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.924 14:26:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.924 14:26:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.924 14:26:32 -- accel/accel.sh@42 -- # jq -r . 00:06:49.924 [2024-10-01 14:26:32.133306] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:49.924 [2024-10-01 14:26:32.133395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692744 ] 00:06:49.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.924 [2024-10-01 14:26:32.210596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.924 [2024-10-01 14:26:32.291822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.306 14:26:33 -- accel/accel.sh@18 -- # out=' 00:06:51.306 SPDK Configuration: 00:06:51.306 Core mask: 0x1 00:06:51.306 00:06:51.306 Accel Perf Configuration: 00:06:51.306 Workload Type: copy_crc32c 00:06:51.306 CRC-32C seed: 0 00:06:51.306 Vector size: 4096 bytes 00:06:51.306 Transfer size: 4096 bytes 00:06:51.306 Vector count 1 00:06:51.306 Module: software 00:06:51.306 Queue depth: 32 00:06:51.306 Allocate depth: 32 00:06:51.306 # threads/core: 1 00:06:51.306 Run time: 1 seconds 00:06:51.306 Verify: Yes 00:06:51.306 00:06:51.306 Running for 1 seconds... 00:06:51.306 00:06:51.306 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.306 ------------------------------------------------------------------------------------ 00:06:51.306 0,0 426272/s 1665 MiB/s 0 0 00:06:51.306 ==================================================================================== 00:06:51.306 Total 426272/s 1665 MiB/s 0 0' 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:51.306 14:26:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:51.306 14:26:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.306 14:26:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.306 14:26:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.306 14:26:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.306 14:26:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.306 14:26:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.306 14:26:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.306 14:26:33 -- accel/accel.sh@42 -- # jq -r . 
00:06:51.306 [2024-10-01 14:26:33.505150] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:51.306 [2024-10-01 14:26:33.505241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692934 ] 00:06:51.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.306 [2024-10-01 14:26:33.581522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.306 [2024-10-01 14:26:33.664676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val= 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val= 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val=0x1 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val= 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val= 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val=0 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val= 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val=software 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val=32 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 
00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val=32 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val=1 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val=Yes 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val= 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:51.306 14:26:33 -- accel/accel.sh@21 -- # val= 00:06:51.306 14:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:51.306 14:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:52.688 14:26:34 -- accel/accel.sh@21 -- # val= 00:06:52.688 14:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:52.689 14:26:34 -- accel/accel.sh@21 -- # val= 00:06:52.689 14:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:52.689 14:26:34 -- accel/accel.sh@21 -- # val= 00:06:52.689 14:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:52.689 14:26:34 -- accel/accel.sh@21 -- # val= 00:06:52.689 14:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:52.689 14:26:34 -- accel/accel.sh@21 -- # val= 00:06:52.689 14:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:52.689 14:26:34 -- accel/accel.sh@21 -- # val= 00:06:52.689 14:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:52.689 14:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:52.689 14:26:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.689 14:26:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:52.689 14:26:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.689 00:06:52.689 real 0m2.743s 00:06:52.689 user 0m2.472s 00:06:52.689 sys 0m0.279s 00:06:52.689 14:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.689 14:26:34 -- common/autotest_common.sh@10 -- # set +x 00:06:52.689 ************************************ 00:06:52.689 END TEST accel_copy_crc32c 00:06:52.689 ************************************ 00:06:52.689 
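[Editor's note, not part of the captured log: the banner above closes the software-path copy_crc32c test. Two details of these summaries are worth spelling out. The Bandwidth column is consistent with transfers/s times bytes per transfer: the copy_crc32c run above reports 426272 transfers/s x 4096 B, about 1665 MiB/s. In the multi-vector runs (-C 2) the per-core row appears to count every vector byte (604032/s x 2 x 4096 B ~ 4719 MiB/s) while the Total row counts the transfer size once (~ 2359 MiB/s); single-vector runs print identical row and Total figures. A single case can also be re-run outside the harness; the sketch below is a hedged reconstruction, not harness code: it reuses the binary path visible in the trace above and simply omits the -c JSON config, assuming accel falls back to the software module just as these runs did (build_accel_config produced an empty accel_json_cfg here).

  # hypothetical manual re-run of the copy_crc32c case (path as seen in this log)
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y
  # cross-check a Bandwidth cell: transfers/s * transfer size, in MiB/s
  echo $((426272 * 4096 / 1048576))   # -> 1665, matching the table above

End of note.]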
14:26:34 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:52.689 14:26:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:52.689 14:26:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.689 14:26:34 -- common/autotest_common.sh@10 -- # set +x 00:06:52.689 ************************************ 00:06:52.689 START TEST accel_copy_crc32c_C2 00:06:52.689 ************************************ 00:06:52.689 14:26:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:52.689 14:26:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.689 14:26:34 -- accel/accel.sh@17 -- # local accel_module 00:06:52.689 14:26:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:52.689 14:26:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:52.689 14:26:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.689 14:26:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.689 14:26:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.689 14:26:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.689 14:26:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.689 14:26:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.689 14:26:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.689 14:26:34 -- accel/accel.sh@42 -- # jq -r . 00:06:52.689 [2024-10-01 14:26:34.924186] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:52.689 [2024-10-01 14:26:34.924261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693166 ] 00:06:52.689 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.689 [2024-10-01 14:26:34.998853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.689 [2024-10-01 14:26:35.078706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.070 14:26:36 -- accel/accel.sh@18 -- # out=' 00:06:54.070 SPDK Configuration: 00:06:54.070 Core mask: 0x1 00:06:54.070 00:06:54.070 Accel Perf Configuration: 00:06:54.070 Workload Type: copy_crc32c 00:06:54.070 CRC-32C seed: 0 00:06:54.070 Vector size: 4096 bytes 00:06:54.070 Transfer size: 8192 bytes 00:06:54.070 Vector count 2 00:06:54.070 Module: software 00:06:54.070 Queue depth: 32 00:06:54.070 Allocate depth: 32 00:06:54.070 # threads/core: 1 00:06:54.070 Run time: 1 seconds 00:06:54.070 Verify: Yes 00:06:54.070 00:06:54.070 Running for 1 seconds... 
00:06:54.070 
00:06:54.070 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:54.070 ------------------------------------------------------------------------------------
00:06:54.070 0,0 295136/s 2305 MiB/s 0 0
00:06:54.070 ====================================================================================
00:06:54.070 Total 295136/s 1152 MiB/s 0 0'
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=:
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val
00:06:54.070 14:26:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:54.070 14:26:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:54.070 14:26:36 -- accel/accel.sh@12 -- # build_accel_config
00:06:54.070 14:26:36 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:54.070 14:26:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:54.070 14:26:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:54.070 14:26:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:54.070 14:26:36 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:54.070 14:26:36 -- accel/accel.sh@41 -- # local IFS=,
00:06:54.070 14:26:36 -- accel/accel.sh@42 -- # jq -r .
00:06:54.070 [2024-10-01 14:26:36.282231] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:54.070 [2024-10-01 14:26:36.282323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693351 ]
00:06:54.070 EAL: No free 2048 kB hugepages reported on node 1
00:06:54.070 [2024-10-01 14:26:36.356249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.070 [2024-10-01 14:26:36.436948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=
00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=:
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val
00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=
00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=:
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val
00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=0x1
00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=:
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val
00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=
00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=:
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val
00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=
00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=:
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val
00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=copy_crc32c
00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in
00:06:54.070 14:26:36 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=:
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val
00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=0
00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=:
00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val= 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=software 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=32 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=32 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=1 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val=Yes 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val= 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:54.070 14:26:36 -- accel/accel.sh@21 -- # val= 00:06:54.070 14:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:54.070 14:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:55.452 14:26:37 -- accel/accel.sh@21 -- # val= 00:06:55.452 14:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:55.452 14:26:37 -- accel/accel.sh@21 -- # val= 00:06:55.452 14:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:55.452 14:26:37 -- accel/accel.sh@21 -- # val= 00:06:55.452 14:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:55.452 14:26:37 -- accel/accel.sh@21 -- # val= 00:06:55.452 14:26:37 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:55.452 14:26:37 -- accel/accel.sh@21 -- # val= 00:06:55.452 14:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:55.452 14:26:37 -- accel/accel.sh@21 -- # val= 00:06:55.452 14:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:55.452 14:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:55.452 14:26:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.452 14:26:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:55.452 14:26:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.452 00:06:55.452 real 0m2.736s 00:06:55.452 user 0m2.444s 00:06:55.452 sys 0m0.299s 00:06:55.452 14:26:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.452 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:06:55.452 ************************************ 00:06:55.452 END TEST accel_copy_crc32c_C2 00:06:55.452 ************************************ 00:06:55.452 14:26:37 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:55.452 14:26:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:55.452 14:26:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.452 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:06:55.452 ************************************ 00:06:55.452 START TEST accel_dualcast 00:06:55.452 ************************************ 00:06:55.452 14:26:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:55.452 14:26:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.452 14:26:37 -- accel/accel.sh@17 -- # local accel_module 00:06:55.452 14:26:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:55.452 14:26:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:55.452 14:26:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.452 14:26:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.452 14:26:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.452 14:26:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.452 14:26:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.452 14:26:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.452 14:26:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.452 14:26:37 -- accel/accel.sh@42 -- # jq -r . 00:06:55.452 [2024-10-01 14:26:37.703214] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:55.452 [2024-10-01 14:26:37.703302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693581 ]
00:06:55.452 EAL: No free 2048 kB hugepages reported on node 1
00:06:55.452 [2024-10-01 14:26:37.778434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.452 [2024-10-01 14:26:37.859796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.833 14:26:39 -- accel/accel.sh@18 -- # out='
00:06:56.833 SPDK Configuration:
00:06:56.833 Core mask: 0x1
00:06:56.833 
00:06:56.833 Accel Perf Configuration:
00:06:56.833 Workload Type: dualcast
00:06:56.833 Transfer size: 4096 bytes
00:06:56.833 Vector count 1
00:06:56.833 Module: software
00:06:56.833 Queue depth: 32
00:06:56.833 Allocate depth: 32
00:06:56.833 # threads/core: 1
00:06:56.833 Run time: 1 seconds
00:06:56.833 Verify: Yes
00:06:56.833 
00:06:56.833 Running for 1 seconds...
00:06:56.833 
00:06:56.833 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:56.833 ------------------------------------------------------------------------------------
00:06:56.833 0,0 619072/s 2418 MiB/s 0 0
00:06:56.833 ====================================================================================
00:06:56.833 Total 619072/s 2418 MiB/s 0 0'
00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=:
00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val
00:06:56.833 14:26:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:56.833 14:26:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:56.833 14:26:39 -- accel/accel.sh@12 -- # build_accel_config
00:06:56.833 14:26:39 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:56.833 14:26:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:56.833 14:26:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:56.833 14:26:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:56.833 14:26:39 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:56.833 14:26:39 -- accel/accel.sh@41 -- # local IFS=,
00:06:56.833 14:26:39 -- accel/accel.sh@42 -- # jq -r .
00:06:56.833 [2024-10-01 14:26:39.074006] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:56.833 [2024-10-01 14:26:39.074098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693768 ] 00:06:56.833 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.833 [2024-10-01 14:26:39.149601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.833 [2024-10-01 14:26:39.230278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val= 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val= 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val=0x1 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val= 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val= 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val=dualcast 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val= 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val=software 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val=32 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val=32 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.833 14:26:39 -- accel/accel.sh@21 -- # val=1 00:06:56.833 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.833 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.834 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.834 14:26:39 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:56.834 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.834 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.834 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.834 14:26:39 -- accel/accel.sh@21 -- # val=Yes 00:06:56.834 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.834 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.834 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.834 14:26:39 -- accel/accel.sh@21 -- # val= 00:06:56.834 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.834 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.834 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:56.834 14:26:39 -- accel/accel.sh@21 -- # val= 00:06:56.834 14:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.834 14:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:56.834 14:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:58.215 14:26:40 -- accel/accel.sh@21 -- # val= 00:06:58.215 14:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:58.215 14:26:40 -- accel/accel.sh@21 -- # val= 00:06:58.215 14:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:58.215 14:26:40 -- accel/accel.sh@21 -- # val= 00:06:58.215 14:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:58.215 14:26:40 -- accel/accel.sh@21 -- # val= 00:06:58.215 14:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:58.215 14:26:40 -- accel/accel.sh@21 -- # val= 00:06:58.215 14:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:58.215 14:26:40 -- accel/accel.sh@21 -- # val= 00:06:58.215 14:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:58.215 14:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:58.215 14:26:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.215 14:26:40 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:58.215 14:26:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.215 00:06:58.215 real 0m2.749s 00:06:58.215 user 0m2.464s 00:06:58.215 sys 0m0.291s 00:06:58.215 14:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.215 14:26:40 -- common/autotest_common.sh@10 -- # set +x 00:06:58.215 ************************************ 00:06:58.215 END TEST accel_dualcast 00:06:58.215 ************************************ 00:06:58.215 14:26:40 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:58.215 14:26:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:58.215 14:26:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.215 14:26:40 -- common/autotest_common.sh@10 -- # set +x 00:06:58.215 ************************************ 00:06:58.215 START TEST accel_compare 00:06:58.215 ************************************ 00:06:58.215 14:26:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:58.215 14:26:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.215 14:26:40 -- 
accel/accel.sh@17 -- # local accel_module 00:06:58.215 14:26:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:58.215 14:26:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:58.215 14:26:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.215 14:26:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.215 14:26:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.215 14:26:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.215 14:26:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.215 14:26:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.215 14:26:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.215 14:26:40 -- accel/accel.sh@42 -- # jq -r . 00:06:58.215 [2024-10-01 14:26:40.502716] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:58.215 [2024-10-01 14:26:40.502809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693994 ] 00:06:58.215 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.215 [2024-10-01 14:26:40.579611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.215 [2024-10-01 14:26:40.660383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.596 14:26:41 -- accel/accel.sh@18 -- # out=' 00:06:59.596 SPDK Configuration: 00:06:59.596 Core mask: 0x1 00:06:59.596 00:06:59.596 Accel Perf Configuration: 00:06:59.596 Workload Type: compare 00:06:59.596 Transfer size: 4096 bytes 00:06:59.596 Vector count 1 00:06:59.596 Module: software 00:06:59.596 Queue depth: 32 00:06:59.596 Allocate depth: 32 00:06:59.596 # threads/core: 1 00:06:59.596 Run time: 1 seconds 00:06:59.596 Verify: Yes 00:06:59.596 00:06:59.596 Running for 1 seconds... 00:06:59.596 00:06:59.596 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.596 ------------------------------------------------------------------------------------ 00:06:59.596 0,0 795456/s 3107 MiB/s 0 0 00:06:59.596 ==================================================================================== 00:06:59.596 Total 795456/s 3107 MiB/s 0 0' 00:06:59.596 14:26:41 -- accel/accel.sh@20 -- # IFS=: 00:06:59.596 14:26:41 -- accel/accel.sh@20 -- # read -r var val 00:06:59.596 14:26:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:59.596 14:26:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:59.596 14:26:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.596 14:26:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.596 14:26:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.596 14:26:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.596 14:26:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.596 14:26:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.596 14:26:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.596 14:26:41 -- accel/accel.sh@42 -- # jq -r . 00:06:59.596 [2024-10-01 14:26:41.876662] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
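A note on the structure of this case, visible in the trace tags above: accel.sh runs accel_perf twice per workload, first capturing the full report (the @18 "out='" capture), then re-running it (@15) and parsing that report line by line with IFS=: / read -r var val and a case "$var" in dispatcher, which is what produces the long val= streams; the [[ -n software ]] / [[ -n dualcast ]] checks at the end of the dualcast case above are the final assertions that a module and opcode were found. A minimal sketch of the same parse applied to a saved report, assuming only the "key: value" layout of the output captured above (the file name and the trimming step are illustrative, not from the trace):

    # Pull "Module: software" and "Workload Type: compare" out of a saved report
    while IFS=: read -r var val; do
        val=${val//[[:space:]]/}            # drop the padding around the value
        case "$var" in
            *Module*) accel_module=$val ;;
            *'Workload Type'*) accel_opc=$val ;;
        esac
    done < report.txt
    [[ -n $accel_module && -n $accel_opc ]] && echo "ran $accel_opc on $accel_module"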
00:06:59.596 [2024-10-01 14:26:41.876766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694178 ] 00:06:59.596 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.597 [2024-10-01 14:26:41.954103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.597 [2024-10-01 14:26:42.034982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val= 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val= 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val=0x1 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val= 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val= 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val=compare 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val= 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val=software 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val=32 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val=32 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val=1 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val=Yes 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val= 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:59.597 14:26:42 -- accel/accel.sh@21 -- # val= 00:06:59.597 14:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:59.597 14:26:42 -- accel/accel.sh@20 -- # read -r var val 00:07:00.976 14:26:43 -- accel/accel.sh@21 -- # val= 00:07:00.976 14:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # IFS=: 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # read -r var val 00:07:00.976 14:26:43 -- accel/accel.sh@21 -- # val= 00:07:00.976 14:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # IFS=: 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # read -r var val 00:07:00.976 14:26:43 -- accel/accel.sh@21 -- # val= 00:07:00.976 14:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # IFS=: 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # read -r var val 00:07:00.976 14:26:43 -- accel/accel.sh@21 -- # val= 00:07:00.976 14:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # IFS=: 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # read -r var val 00:07:00.976 14:26:43 -- accel/accel.sh@21 -- # val= 00:07:00.976 14:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # IFS=: 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # read -r var val 00:07:00.976 14:26:43 -- accel/accel.sh@21 -- # val= 00:07:00.976 14:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # IFS=: 00:07:00.976 14:26:43 -- accel/accel.sh@20 -- # read -r var val 00:07:00.976 14:26:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.976 14:26:43 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:00.976 14:26:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.976 00:07:00.976 real 0m2.754s 00:07:00.976 user 0m2.473s 00:07:00.976 sys 0m0.286s 00:07:00.976 14:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.976 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:07:00.976 ************************************ 00:07:00.976 END TEST accel_compare 00:07:00.976 ************************************ 00:07:00.976 14:26:43 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:00.976 14:26:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:00.976 14:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.976 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:07:00.976 ************************************ 00:07:00.976 START TEST accel_xor 00:07:00.976 ************************************ 00:07:00.976 14:26:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:00.976 14:26:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.976 14:26:43 -- accel/accel.sh@17 
-- # local accel_module 00:07:00.976 14:26:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:00.976 14:26:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:00.976 14:26:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.976 14:26:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.976 14:26:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.976 14:26:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.976 14:26:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.976 14:26:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.976 14:26:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.976 14:26:43 -- accel/accel.sh@42 -- # jq -r . 00:07:00.976 [2024-10-01 14:26:43.305167] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:00.976 [2024-10-01 14:26:43.305243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694413 ] 00:07:00.976 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.976 [2024-10-01 14:26:43.383455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.976 [2024-10-01 14:26:43.465653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.357 14:26:44 -- accel/accel.sh@18 -- # out=' 00:07:02.357 SPDK Configuration: 00:07:02.357 Core mask: 0x1 00:07:02.357 00:07:02.357 Accel Perf Configuration: 00:07:02.357 Workload Type: xor 00:07:02.357 Source buffers: 2 00:07:02.357 Transfer size: 4096 bytes 00:07:02.357 Vector count 1 00:07:02.358 Module: software 00:07:02.358 Queue depth: 32 00:07:02.358 Allocate depth: 32 00:07:02.358 # threads/core: 1 00:07:02.358 Run time: 1 seconds 00:07:02.358 Verify: Yes 00:07:02.358 00:07:02.358 Running for 1 seconds... 00:07:02.358 00:07:02.358 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.358 ------------------------------------------------------------------------------------ 00:07:02.358 0,0 707232/s 2762 MiB/s 0 0 00:07:02.358 ==================================================================================== 00:07:02.358 Total 707232/s 2762 MiB/s 0 0' 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:02.358 14:26:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:02.358 14:26:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.358 14:26:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.358 14:26:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.358 14:26:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.358 14:26:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.358 14:26:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.358 14:26:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.358 14:26:44 -- accel/accel.sh@42 -- # jq -r . 00:07:02.358 [2024-10-01 14:26:44.672808] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
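The bandwidth columns in these reports follow directly from the transfer rate and the 4096-byte transfer size, so the xor summary above can be sanity-checked in plain shell using nothing but the numbers printed in the table:

    # transfers/s * transfer size (4096 B) -> MiB/s
    echo $(( 707232 * 4096 / 1048576 ))     # prints 2762, matching the xor rows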
00:07:02.358 [2024-10-01 14:26:44.672896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694614 ] 00:07:02.358 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.358 [2024-10-01 14:26:44.746985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.358 [2024-10-01 14:26:44.824267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val= 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val= 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val=0x1 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val= 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val= 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val=xor 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val=2 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val= 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val=software 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val=32 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val=32 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- 
accel/accel.sh@21 -- # val=1 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.358 14:26:44 -- accel/accel.sh@21 -- # val=Yes 00:07:02.358 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.358 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.617 14:26:44 -- accel/accel.sh@21 -- # val= 00:07:02.617 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.617 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.617 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:02.617 14:26:44 -- accel/accel.sh@21 -- # val= 00:07:02.617 14:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.617 14:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:02.617 14:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:03.556 14:26:45 -- accel/accel.sh@21 -- # val= 00:07:03.556 14:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.556 14:26:45 -- accel/accel.sh@20 -- # IFS=: 00:07:03.556 14:26:45 -- accel/accel.sh@20 -- # read -r var val 00:07:03.556 14:26:45 -- accel/accel.sh@21 -- # val= 00:07:03.556 14:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # IFS=: 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # read -r var val 00:07:03.556 14:26:46 -- accel/accel.sh@21 -- # val= 00:07:03.556 14:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # IFS=: 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # read -r var val 00:07:03.556 14:26:46 -- accel/accel.sh@21 -- # val= 00:07:03.556 14:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # IFS=: 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # read -r var val 00:07:03.556 14:26:46 -- accel/accel.sh@21 -- # val= 00:07:03.556 14:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # IFS=: 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # read -r var val 00:07:03.556 14:26:46 -- accel/accel.sh@21 -- # val= 00:07:03.556 14:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # IFS=: 00:07:03.556 14:26:46 -- accel/accel.sh@20 -- # read -r var val 00:07:03.556 14:26:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.556 14:26:46 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:03.556 14:26:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.556 00:07:03.556 real 0m2.724s 00:07:03.556 user 0m2.467s 00:07:03.556 sys 0m0.263s 00:07:03.556 14:26:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.557 14:26:46 -- common/autotest_common.sh@10 -- # set +x 00:07:03.557 ************************************ 00:07:03.557 END TEST accel_xor 00:07:03.557 ************************************ 00:07:03.557 14:26:46 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:03.557 14:26:46 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:03.557 14:26:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.557 14:26:46 -- common/autotest_common.sh@10 -- # set +x 00:07:03.557 ************************************ 00:07:03.557 START TEST accel_xor 
00:07:03.557 ************************************ 00:07:03.557 14:26:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:03.557 14:26:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.557 14:26:46 -- accel/accel.sh@17 -- # local accel_module 00:07:03.557 14:26:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:03.557 14:26:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:03.557 14:26:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.557 14:26:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.557 14:26:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.557 14:26:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.557 14:26:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.557 14:26:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.557 14:26:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.557 14:26:46 -- accel/accel.sh@42 -- # jq -r . 00:07:03.557 [2024-10-01 14:26:46.075609] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:03.557 [2024-10-01 14:26:46.075697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694813 ] 00:07:03.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.817 [2024-10-01 14:26:46.149381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.817 [2024-10-01 14:26:46.230845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.367 14:26:47 -- accel/accel.sh@18 -- # out=' 00:07:05.367 SPDK Configuration: 00:07:05.367 Core mask: 0x1 00:07:05.367 00:07:05.367 Accel Perf Configuration: 00:07:05.367 Workload Type: xor 00:07:05.367 Source buffers: 3 00:07:05.367 Transfer size: 4096 bytes 00:07:05.367 Vector count 1 00:07:05.367 Module: software 00:07:05.367 Queue depth: 32 00:07:05.367 Allocate depth: 32 00:07:05.367 # threads/core: 1 00:07:05.367 Run time: 1 seconds 00:07:05.367 Verify: Yes 00:07:05.367 00:07:05.367 Running for 1 seconds... 00:07:05.367 00:07:05.367 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.367 ------------------------------------------------------------------------------------ 00:07:05.367 0,0 639232/s 2497 MiB/s 0 0 00:07:05.367 ==================================================================================== 00:07:05.367 Total 639232/s 2497 MiB/s 0 0' 00:07:05.367 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.367 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.367 14:26:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:05.367 14:26:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:05.367 14:26:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.367 14:26:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.367 14:26:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.367 14:26:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.367 14:26:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.367 14:26:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.367 14:26:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.367 14:26:47 -- accel/accel.sh@42 -- # jq -r . 00:07:05.367 [2024-10-01 14:26:47.443529] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:05.367 [2024-10-01 14:26:47.443617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694992 ] 00:07:05.367 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.367 [2024-10-01 14:26:47.520136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.367 [2024-10-01 14:26:47.601830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val= 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val= 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val=0x1 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val= 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val= 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val=xor 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val=3 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val= 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val=software 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val=32 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val=32 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- 
accel/accel.sh@21 -- # val=1 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val=Yes 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val= 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:05.368 14:26:47 -- accel/accel.sh@21 -- # val= 00:07:05.368 14:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:05.368 14:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:06.360 14:26:48 -- accel/accel.sh@21 -- # val= 00:07:06.360 14:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:06.360 14:26:48 -- accel/accel.sh@21 -- # val= 00:07:06.360 14:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:06.360 14:26:48 -- accel/accel.sh@21 -- # val= 00:07:06.360 14:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:06.360 14:26:48 -- accel/accel.sh@21 -- # val= 00:07:06.360 14:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:06.360 14:26:48 -- accel/accel.sh@21 -- # val= 00:07:06.360 14:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:06.360 14:26:48 -- accel/accel.sh@21 -- # val= 00:07:06.360 14:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:06.360 14:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:06.360 14:26:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.360 14:26:48 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:06.360 14:26:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.360 00:07:06.360 real 0m2.748s 00:07:06.360 user 0m2.466s 00:07:06.360 sys 0m0.289s 00:07:06.360 14:26:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.360 14:26:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.360 ************************************ 00:07:06.360 END TEST accel_xor 00:07:06.360 ************************************ 00:07:06.360 14:26:48 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:06.360 14:26:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:06.360 14:26:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.360 14:26:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.360 ************************************ 00:07:06.360 START TEST 
accel_dif_verify 00:07:06.360 ************************************ 00:07:06.360 14:26:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:06.360 14:26:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.360 14:26:48 -- accel/accel.sh@17 -- # local accel_module 00:07:06.360 14:26:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:06.360 14:26:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:06.360 14:26:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.360 14:26:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.360 14:26:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.361 14:26:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.361 14:26:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.361 14:26:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.361 14:26:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.361 14:26:48 -- accel/accel.sh@42 -- # jq -r . 00:07:06.361 [2024-10-01 14:26:48.872028] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:06.361 [2024-10-01 14:26:48.872117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695201 ] 00:07:06.620 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.620 [2024-10-01 14:26:48.945924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.620 [2024-10-01 14:26:49.026470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.000 14:26:50 -- accel/accel.sh@18 -- # out=' 00:07:08.000 SPDK Configuration: 00:07:08.000 Core mask: 0x1 00:07:08.000 00:07:08.000 Accel Perf Configuration: 00:07:08.000 Workload Type: dif_verify 00:07:08.000 Vector size: 4096 bytes 00:07:08.000 Transfer size: 4096 bytes 00:07:08.000 Block size: 512 bytes 00:07:08.000 Metadata size: 8 bytes 00:07:08.000 Vector count 1 00:07:08.000 Module: software 00:07:08.000 Queue depth: 32 00:07:08.000 Allocate depth: 32 00:07:08.000 # threads/core: 1 00:07:08.000 Run time: 1 seconds 00:07:08.000 Verify: No 00:07:08.000 00:07:08.000 Running for 1 seconds... 00:07:08.000 00:07:08.000 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.000 ------------------------------------------------------------------------------------ 00:07:08.000 0,0 233088/s 924 MiB/s 0 0 00:07:08.000 ==================================================================================== 00:07:08.000 Total 233088/s 910 MiB/s 0 0' 00:07:08.000 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.000 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.000 14:26:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:08.000 14:26:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:08.000 14:26:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.000 14:26:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.000 14:26:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.000 14:26:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.000 14:26:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.000 14:26:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.000 14:26:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.000 14:26:50 -- accel/accel.sh@42 -- # jq -r . 
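One oddity in the dif_verify summary above: the 0,0 row reports 924 MiB/s while the Total row reports 910 MiB/s for the same 233088 transfers/s. The Total figure is the one the transfer rate implies, so the per-core row is presumably computed over a slightly different elapsed time (an inference; the log itself does not say):

    # 233088 transfers/s * 4096 B ~= 910.5 MiB/s, i.e. the Total row above
    echo "scale=1; 233088 * 4096 / 1048576" | bc

The same per-core/Total gap recurs below for dif_generate (1109 vs 1092 MiB/s) and dif_generate_copy (860 vs 847 MiB/s).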
00:07:08.000 [2024-10-01 14:26:50.243523] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:08.000 [2024-10-01 14:26:50.243612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695379 ] 00:07:08.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.000 [2024-10-01 14:26:50.318730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.000 [2024-10-01 14:26:50.402101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.000 14:26:50 -- accel/accel.sh@21 -- # val= 00:07:08.000 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.000 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.000 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.000 14:26:50 -- accel/accel.sh@21 -- # val= 00:07:08.000 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.000 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.000 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.000 14:26:50 -- accel/accel.sh@21 -- # val=0x1 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val= 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val= 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val=dif_verify 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val= 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val=software 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val=32 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val=32 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val=1 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val=No 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val= 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:08.001 14:26:50 -- accel/accel.sh@21 -- # val= 00:07:08.001 14:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:08.001 14:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:09.382 14:26:51 -- accel/accel.sh@21 -- # val= 00:07:09.382 14:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:09.382 14:26:51 -- accel/accel.sh@21 -- # val= 00:07:09.382 14:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:09.382 14:26:51 -- accel/accel.sh@21 -- # val= 00:07:09.382 14:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:09.382 14:26:51 -- accel/accel.sh@21 -- # val= 00:07:09.382 14:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:09.382 14:26:51 -- accel/accel.sh@21 -- # val= 00:07:09.382 14:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:09.382 14:26:51 -- accel/accel.sh@21 -- # val= 00:07:09.382 14:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:09.382 14:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:09.382 14:26:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.382 14:26:51 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:09.382 14:26:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.382 00:07:09.382 real 0m2.752s 00:07:09.382 user 0m2.478s 00:07:09.382 sys 0m0.281s 00:07:09.382 14:26:51 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.382 14:26:51 -- common/autotest_common.sh@10 -- # set +x 00:07:09.382 ************************************ 00:07:09.382 END TEST accel_dif_verify 00:07:09.382 ************************************ 00:07:09.382 14:26:51 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:09.382 14:26:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:09.382 14:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.382 14:26:51 -- common/autotest_common.sh@10 -- # set +x 00:07:09.382 ************************************ 00:07:09.382 START TEST accel_dif_generate 00:07:09.382 ************************************ 00:07:09.382 14:26:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:09.382 14:26:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.382 14:26:51 -- accel/accel.sh@17 -- # local accel_module 00:07:09.382 14:26:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:09.382 14:26:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.382 14:26:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.382 14:26:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:09.382 14:26:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.382 14:26:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.382 14:26:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.382 14:26:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.382 14:26:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.382 14:26:51 -- accel/accel.sh@42 -- # jq -r . 00:07:09.382 [2024-10-01 14:26:51.670659] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:09.382 [2024-10-01 14:26:51.670739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695574 ] 00:07:09.382 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.382 [2024-10-01 14:26:51.745860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.382 [2024-10-01 14:26:51.833970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.760 14:26:53 -- accel/accel.sh@18 -- # out=' 00:07:10.760 SPDK Configuration: 00:07:10.760 Core mask: 0x1 00:07:10.760 00:07:10.760 Accel Perf Configuration: 00:07:10.760 Workload Type: dif_generate 00:07:10.760 Vector size: 4096 bytes 00:07:10.760 Transfer size: 4096 bytes 00:07:10.760 Block size: 512 bytes 00:07:10.760 Metadata size: 8 bytes 00:07:10.760 Vector count 1 00:07:10.760 Module: software 00:07:10.760 Queue depth: 32 00:07:10.760 Allocate depth: 32 00:07:10.760 # threads/core: 1 00:07:10.760 Run time: 1 seconds 00:07:10.760 Verify: No 00:07:10.760 00:07:10.760 Running for 1 seconds... 
00:07:10.760 00:07:10.760 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.760 ------------------------------------------------------------------------------------ 00:07:10.760 0,0 279712/s 1109 MiB/s 0 0 00:07:10.760 ==================================================================================== 00:07:10.760 Total 279712/s 1092 MiB/s 0 0' 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:10.760 14:26:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.760 14:26:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:10.760 14:26:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.760 14:26:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.760 14:26:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.760 14:26:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.760 14:26:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.760 14:26:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.760 14:26:53 -- accel/accel.sh@42 -- # jq -r . 00:07:10.760 [2024-10-01 14:26:53.048355] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:10.760 [2024-10-01 14:26:53.048444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695761 ] 00:07:10.760 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.760 [2024-10-01 14:26:53.122280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.760 [2024-10-01 14:26:53.203297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val= 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val= 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val=0x1 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val= 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val= 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val=dif_generate 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 
00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val= 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val=software 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val=32 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val=32 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val=1 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.760 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.760 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.760 14:26:53 -- accel/accel.sh@21 -- # val=No 00:07:10.761 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.761 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.761 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.761 14:26:53 -- accel/accel.sh@21 -- # val= 00:07:10.761 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.761 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.761 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:10.761 14:26:53 -- accel/accel.sh@21 -- # val= 00:07:10.761 14:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.761 14:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:10.761 14:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:12.140 14:26:54 -- accel/accel.sh@21 -- # val= 00:07:12.140 14:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:12.140 14:26:54 -- accel/accel.sh@21 -- # val= 00:07:12.140 14:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:12.140 14:26:54 -- accel/accel.sh@21 -- # val= 00:07:12.140 14:26:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:12.140 14:26:54 -- accel/accel.sh@21 -- # val= 00:07:12.140 14:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:12.140 14:26:54 -- accel/accel.sh@21 -- # val= 00:07:12.140 14:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:12.140 14:26:54 -- accel/accel.sh@21 -- # val= 00:07:12.140 14:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:12.140 14:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:12.140 14:26:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.140 14:26:54 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:12.140 14:26:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.140 00:07:12.140 real 0m2.752s 00:07:12.140 user 0m2.468s 00:07:12.140 sys 0m0.292s 00:07:12.140 14:26:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.140 14:26:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.140 ************************************ 00:07:12.140 END TEST accel_dif_generate 00:07:12.140 ************************************ 00:07:12.140 14:26:54 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:12.140 14:26:54 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:12.140 14:26:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.140 14:26:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.140 ************************************ 00:07:12.140 START TEST accel_dif_generate_copy 00:07:12.140 ************************************ 00:07:12.140 14:26:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:12.140 14:26:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.140 14:26:54 -- accel/accel.sh@17 -- # local accel_module 00:07:12.140 14:26:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:12.140 14:26:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.140 14:26:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.140 14:26:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:12.140 14:26:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.140 14:26:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.141 14:26:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.141 14:26:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.141 14:26:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.141 14:26:54 -- accel/accel.sh@42 -- # jq -r . 00:07:12.141 [2024-10-01 14:26:54.470571] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
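Every accel_perf invocation traced in this log, including the dif_generate_copy startup above, passes -c /dev/fd/62: the JSON accel config is handed to the binary over an inherited file descriptor rather than a file on disk, /dev/fd/62 being the kind of path bash assigns for process substitution. Since accel_json_cfg=() stays empty for every case in this run, the piped config is effectively empty. A hypothetical hand-rolled equivalent (the fd number and the empty JSON body are assumptions, not taken from the trace):

    # Feed an (empty) JSON config over an inherited descriptor, as accel.sh does
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w dif_generate_copy 62< <(printf '{}')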
00:07:12.141 [2024-10-01 14:26:54.470661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695960 ] 00:07:12.141 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.141 [2024-10-01 14:26:54.544643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.141 [2024-10-01 14:26:54.622693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.520 14:26:55 -- accel/accel.sh@18 -- # out=' 00:07:13.520 SPDK Configuration: 00:07:13.520 Core mask: 0x1 00:07:13.520 00:07:13.520 Accel Perf Configuration: 00:07:13.520 Workload Type: dif_generate_copy 00:07:13.520 Vector size: 4096 bytes 00:07:13.520 Transfer size: 4096 bytes 00:07:13.520 Vector count 1 00:07:13.520 Module: software 00:07:13.520 Queue depth: 32 00:07:13.520 Allocate depth: 32 00:07:13.520 # threads/core: 1 00:07:13.520 Run time: 1 seconds 00:07:13.520 Verify: No 00:07:13.520 00:07:13.520 Running for 1 seconds... 00:07:13.520 00:07:13.520 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.520 ------------------------------------------------------------------------------------ 00:07:13.520 0,0 216992/s 860 MiB/s 0 0 00:07:13.520 ==================================================================================== 00:07:13.520 Total 216992/s 847 MiB/s 0 0' 00:07:13.520 14:26:55 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:55 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:13.520 14:26:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:13.520 14:26:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.520 14:26:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.520 14:26:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.520 14:26:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.520 14:26:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.520 14:26:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.520 14:26:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.520 14:26:55 -- accel/accel.sh@42 -- # jq -r . 00:07:13.520 [2024-10-01 14:26:55.822308] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
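With dif_generate_copy underway, this stretch of the log has covered compare, xor with two and three sources, dif_verify, dif_generate and dif_generate_copy, each ending in a "Total ... MiB/s" summary row. When working from a saved copy of the console output, those rows can be pulled out in one pass (generic grep, no log-specific tooling assumed):

    # Extract the per-workload summary rows from a saved console log
    grep -E 'Total [0-9]+/s' build.log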
00:07:13.520 [2024-10-01 14:26:55.822396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696139 ] 00:07:13.520 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.520 [2024-10-01 14:26:55.896734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.520 [2024-10-01 14:26:55.975006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val= 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val= 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val=0x1 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val= 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val= 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val= 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val=software 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val=32 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val=32 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var 
val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val=1 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val=No 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val= 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:13.520 14:26:56 -- accel/accel.sh@21 -- # val= 00:07:13.520 14:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:13.520 14:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:14.899 14:26:57 -- accel/accel.sh@21 -- # val= 00:07:14.899 14:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:14.899 14:26:57 -- accel/accel.sh@21 -- # val= 00:07:14.899 14:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:14.899 14:26:57 -- accel/accel.sh@21 -- # val= 00:07:14.899 14:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:14.899 14:26:57 -- accel/accel.sh@21 -- # val= 00:07:14.899 14:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:14.899 14:26:57 -- accel/accel.sh@21 -- # val= 00:07:14.899 14:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:14.899 14:26:57 -- accel/accel.sh@21 -- # val= 00:07:14.899 14:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:14.899 14:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:14.899 14:26:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.899 14:26:57 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:14.899 14:26:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.899 00:07:14.899 real 0m2.711s 00:07:14.899 user 0m2.443s 00:07:14.899 sys 0m0.275s 00:07:14.899 14:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.899 14:26:57 -- common/autotest_common.sh@10 -- # set +x 00:07:14.899 ************************************ 00:07:14.899 END TEST accel_dif_generate_copy 00:07:14.900 ************************************ 00:07:14.900 14:26:57 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:14.900 14:26:57 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:14.900 14:26:57 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:14.900 14:26:57 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.900 14:26:57 -- common/autotest_common.sh@10 -- # set +x 00:07:14.900 ************************************ 00:07:14.900 START TEST accel_comp 00:07:14.900 ************************************ 00:07:14.900 14:26:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:14.900 14:26:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.900 14:26:57 -- accel/accel.sh@17 -- # local accel_module 00:07:14.900 14:26:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:14.900 14:26:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:14.900 14:26:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.900 14:26:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.900 14:26:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.900 14:26:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.900 14:26:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.900 14:26:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.900 14:26:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.900 14:26:57 -- accel/accel.sh@42 -- # jq -r . 00:07:14.900 [2024-10-01 14:26:57.221379] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:14.900 [2024-10-01 14:26:57.221446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696343 ] 00:07:14.900 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.900 [2024-10-01 14:26:57.288530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.900 [2024-10-01 14:26:57.370336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.278 14:26:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:16.278 00:07:16.278 SPDK Configuration: 00:07:16.278 Core mask: 0x1 00:07:16.278 00:07:16.278 Accel Perf Configuration: 00:07:16.278 Workload Type: compress 00:07:16.278 Transfer size: 4096 bytes 00:07:16.278 Vector count 1 00:07:16.278 Module: software 00:07:16.278 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:16.278 Queue depth: 32 00:07:16.278 Allocate depth: 32 00:07:16.278 # threads/core: 1 00:07:16.278 Run time: 1 seconds 00:07:16.278 Verify: No 00:07:16.278 00:07:16.278 Running for 1 seconds... 
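The compress configuration block above, with its results to follow on the next lines, comes down to one accel_perf invocation. A minimal standalone sketch, assuming the same checkout path and dropping the harness's '-c /dev/fd/62' JSON config plumbing:

    # 1-second software compress benchmark over the bundled test corpus
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    sudo $SPDK_DIR/build/examples/accel_perf -t 1 -w compress -l $SPDK_DIR/test/accel/bib

The queue depth of 32 and 4096-byte transfer size in the dump appear with no explicit -q or -o flags in the trace, so they are evidently the defaults.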
00:07:16.278 00:07:16.278 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.278 ------------------------------------------------------------------------------------ 00:07:16.278 0,0 67424/s 281 MiB/s 0 0 00:07:16.278 ==================================================================================== 00:07:16.278 Total 67424/s 263 MiB/s 0 0' 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.278 14:26:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:16.278 14:26:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:16.278 14:26:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.278 14:26:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.278 14:26:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.278 14:26:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.278 14:26:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.278 14:26:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.278 14:26:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.278 14:26:58 -- accel/accel.sh@42 -- # jq -r . 00:07:16.278 [2024-10-01 14:26:58.589052] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:16.278 [2024-10-01 14:26:58.589156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696521 ] 00:07:16.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.278 [2024-10-01 14:26:58.666748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.278 [2024-10-01 14:26:58.745577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.278 14:26:58 -- accel/accel.sh@21 -- # val= 00:07:16.278 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.278 14:26:58 -- accel/accel.sh@21 -- # val= 00:07:16.278 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.278 14:26:58 -- accel/accel.sh@21 -- # val= 00:07:16.278 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.278 14:26:58 -- accel/accel.sh@21 -- # val=0x1 00:07:16.278 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.278 14:26:58 -- accel/accel.sh@21 -- # val= 00:07:16.278 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.278 14:26:58 -- accel/accel.sh@21 -- # val= 00:07:16.278 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.278 14:26:58 -- accel/accel.sh@21 -- # val=compress 00:07:16.278 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.278 
14:26:58 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.278 14:26:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.278 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.278 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.278 14:26:58 -- accel/accel.sh@21 -- # val= 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.538 14:26:58 -- accel/accel.sh@21 -- # val=software 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.538 14:26:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.538 14:26:58 -- accel/accel.sh@21 -- # val=32 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.538 14:26:58 -- accel/accel.sh@21 -- # val=32 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.538 14:26:58 -- accel/accel.sh@21 -- # val=1 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.538 14:26:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.538 14:26:58 -- accel/accel.sh@21 -- # val=No 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.538 14:26:58 -- accel/accel.sh@21 -- # val= 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:16.538 14:26:58 -- accel/accel.sh@21 -- # val= 00:07:16.538 14:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # IFS=: 00:07:16.538 14:26:58 -- accel/accel.sh@20 -- # read -r var val 00:07:17.475 14:26:59 -- accel/accel.sh@21 -- # val= 00:07:17.475 14:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.475 14:26:59 -- accel/accel.sh@20 -- # IFS=: 00:07:17.475 14:26:59 -- accel/accel.sh@20 -- # read -r var val 00:07:17.475 14:26:59 -- accel/accel.sh@21 -- # val= 00:07:17.475 14:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # IFS=: 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # read -r var val 00:07:17.476 14:26:59 -- accel/accel.sh@21 -- # val= 00:07:17.476 14:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # 
IFS=: 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # read -r var val 00:07:17.476 14:26:59 -- accel/accel.sh@21 -- # val= 00:07:17.476 14:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # IFS=: 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # read -r var val 00:07:17.476 14:26:59 -- accel/accel.sh@21 -- # val= 00:07:17.476 14:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # IFS=: 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # read -r var val 00:07:17.476 14:26:59 -- accel/accel.sh@21 -- # val= 00:07:17.476 14:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # IFS=: 00:07:17.476 14:26:59 -- accel/accel.sh@20 -- # read -r var val 00:07:17.476 14:26:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.476 14:26:59 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:17.476 14:26:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.476 00:07:17.476 real 0m2.740s 00:07:17.476 user 0m2.461s 00:07:17.476 sys 0m0.284s 00:07:17.476 14:26:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.476 14:26:59 -- common/autotest_common.sh@10 -- # set +x 00:07:17.476 ************************************ 00:07:17.476 END TEST accel_comp 00:07:17.476 ************************************ 00:07:17.476 14:26:59 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:17.476 14:26:59 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:17.476 14:26:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.476 14:26:59 -- common/autotest_common.sh@10 -- # set +x 00:07:17.476 ************************************ 00:07:17.476 START TEST accel_decomp 00:07:17.476 ************************************ 00:07:17.476 14:26:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:17.476 14:26:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.476 14:26:59 -- accel/accel.sh@17 -- # local accel_module 00:07:17.476 14:26:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:17.735 14:26:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:17.735 14:26:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.735 14:26:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.735 14:26:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.735 14:26:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.735 14:26:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.735 14:27:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.735 14:27:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.735 14:27:00 -- accel/accel.sh@42 -- # jq -r . 00:07:17.735 [2024-10-01 14:27:00.017196] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
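accel_decomp drives the same binary with '-w decompress' and adds '-y', which lines up with 'Verify: Yes' in the configuration dump below (the compress run above reported 'Verify: No'). A standalone sketch under the same path assumption as before:

    # decompress benchmark with result verification enabled
    sudo $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress -l $SPDK_DIR/test/accel/bib -y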
00:07:17.735 [2024-10-01 14:27:00.017294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696722 ] 00:07:17.735 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.735 [2024-10-01 14:27:00.095804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.735 [2024-10-01 14:27:00.186231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.113 14:27:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:19.113 00:07:19.113 SPDK Configuration: 00:07:19.113 Core mask: 0x1 00:07:19.113 00:07:19.113 Accel Perf Configuration: 00:07:19.113 Workload Type: decompress 00:07:19.113 Transfer size: 4096 bytes 00:07:19.113 Vector count 1 00:07:19.113 Module: software 00:07:19.113 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:19.113 Queue depth: 32 00:07:19.113 Allocate depth: 32 00:07:19.113 # threads/core: 1 00:07:19.113 Run time: 1 seconds 00:07:19.113 Verify: Yes 00:07:19.113 00:07:19.113 Running for 1 seconds... 00:07:19.114 00:07:19.114 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.114 ------------------------------------------------------------------------------------ 00:07:19.114 0,0 87104/s 160 MiB/s 0 0 00:07:19.114 ==================================================================================== 00:07:19.114 Total 87104/s 340 MiB/s 0 0' 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:19.114 14:27:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:19.114 14:27:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.114 14:27:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.114 14:27:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.114 14:27:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.114 14:27:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.114 14:27:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.114 14:27:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.114 14:27:01 -- accel/accel.sh@42 -- # jq -r . 00:07:19.114 [2024-10-01 14:27:01.404069] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
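As a quick consistency check on the first decompress run above, which reported 87104 transfers/s at 4096 bytes each, the Total bandwidth can be recomputed with shell arithmetic:

    # transfers/s multiplied by bytes per transfer, converted to MiB/s
    echo $(( 87104 * 4096 / 1024 / 1024 ))   # prints 340, matching the Total row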
00:07:19.114 [2024-10-01 14:27:01.404161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696995 ] 00:07:19.114 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.114 [2024-10-01 14:27:01.479648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.114 [2024-10-01 14:27:01.560396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val= 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val= 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val= 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val=0x1 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val= 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val= 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val=decompress 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val= 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val=software 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val=32 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 
-- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val=32 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val=1 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val=Yes 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val= 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:19.114 14:27:01 -- accel/accel.sh@21 -- # val= 00:07:19.114 14:27:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:19.114 14:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:20.494 14:27:02 -- accel/accel.sh@21 -- # val= 00:07:20.494 14:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:20.494 14:27:02 -- accel/accel.sh@21 -- # val= 00:07:20.494 14:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:20.494 14:27:02 -- accel/accel.sh@21 -- # val= 00:07:20.494 14:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:20.494 14:27:02 -- accel/accel.sh@21 -- # val= 00:07:20.494 14:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:20.494 14:27:02 -- accel/accel.sh@21 -- # val= 00:07:20.494 14:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:20.494 14:27:02 -- accel/accel.sh@21 -- # val= 00:07:20.494 14:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:20.494 14:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:20.494 14:27:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.494 14:27:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.494 14:27:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.494 00:07:20.494 real 0m2.766s 00:07:20.494 user 0m2.482s 00:07:20.494 sys 0m0.288s 00:07:20.494 14:27:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.494 14:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:20.494 ************************************ 00:07:20.494 END TEST accel_decomp 00:07:20.494 ************************************ 00:07:20.494 14:27:02 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.494 14:27:02 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:20.494 14:27:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.494 14:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:20.494 ************************************ 00:07:20.494 START TEST accel_decmop_full 00:07:20.494 ************************************ 00:07:20.494 14:27:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.494 14:27:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.494 14:27:02 -- accel/accel.sh@17 -- # local accel_module 00:07:20.494 14:27:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.494 14:27:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.494 14:27:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.494 14:27:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.494 14:27:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.494 14:27:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.494 14:27:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.494 14:27:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.494 14:27:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.494 14:27:02 -- accel/accel.sh@42 -- # jq -r . 00:07:20.494 [2024-10-01 14:27:02.822517] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:20.494 [2024-10-01 14:27:02.822581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697221 ] 00:07:20.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.494 [2024-10-01 14:27:02.888246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.494 [2024-10-01 14:27:02.969324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.873 14:27:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:21.873 00:07:21.873 SPDK Configuration: 00:07:21.873 Core mask: 0x1 00:07:21.873 00:07:21.873 Accel Perf Configuration: 00:07:21.873 Workload Type: decompress 00:07:21.873 Transfer size: 111250 bytes 00:07:21.873 Vector count 1 00:07:21.873 Module: software 00:07:21.873 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:21.873 Queue depth: 32 00:07:21.873 Allocate depth: 32 00:07:21.873 # threads/core: 1 00:07:21.873 Run time: 1 seconds 00:07:21.873 Verify: Yes 00:07:21.873 00:07:21.873 Running for 1 seconds... 
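accel_decmop_full repeats the decompress test with '-o 0' added. Judging from the configuration dump above, whose results follow below, that switches the transfer size from the 4096-byte default to the full 111250-byte input, so each completed operation moves the whole buffer; this reading of '-o 0' is inferred from the dump, not from accel_perf documentation. A sketch under the same path assumptions:

    # full-buffer decompress; -o 0 appears to let the input size drive the transfer size
    sudo $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress -l $SPDK_DIR/test/accel/bib -y -o 0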
00:07:21.873 00:07:21.873 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.873 ------------------------------------------------------------------------------------ 00:07:21.873 0,0 5792/s 239 MiB/s 0 0 00:07:21.873 ==================================================================================== 00:07:21.873 Total 5792/s 614 MiB/s 0 0' 00:07:21.873 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:21.873 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:21.873 14:27:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.873 14:27:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.873 14:27:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.873 14:27:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.873 14:27:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.873 14:27:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.873 14:27:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.873 14:27:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.873 14:27:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.873 14:27:04 -- accel/accel.sh@42 -- # jq -r . 00:07:21.873 [2024-10-01 14:27:04.194180] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:21.873 [2024-10-01 14:27:04.194267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697433 ] 00:07:21.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.873 [2024-10-01 14:27:04.270269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.873 [2024-10-01 14:27:04.352872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val= 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val= 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val= 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val=0x1 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val= 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val= 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val=decompress 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val= 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val=software 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val=32 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val=32 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val=1 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val=Yes 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val= 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:22.133 14:27:04 -- accel/accel.sh@21 -- # val= 00:07:22.133 14:27:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # IFS=: 00:07:22.133 14:27:04 -- accel/accel.sh@20 -- # read -r var val 00:07:23.072 14:27:05 -- accel/accel.sh@21 -- # val= 00:07:23.072 14:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:23.072 14:27:05 -- accel/accel.sh@21 -- # val= 00:07:23.072 14:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:23.072 14:27:05 -- accel/accel.sh@21 -- # val= 00:07:23.072 14:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.072 14:27:05 
-- accel/accel.sh@20 -- # IFS=: 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:23.072 14:27:05 -- accel/accel.sh@21 -- # val= 00:07:23.072 14:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:23.072 14:27:05 -- accel/accel.sh@21 -- # val= 00:07:23.072 14:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:23.072 14:27:05 -- accel/accel.sh@21 -- # val= 00:07:23.072 14:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:23.072 14:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:23.072 14:27:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.072 14:27:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:23.072 14:27:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.072 00:07:23.072 real 0m2.750s 00:07:23.072 user 0m2.469s 00:07:23.072 sys 0m0.283s 00:07:23.072 14:27:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.072 14:27:05 -- common/autotest_common.sh@10 -- # set +x 00:07:23.072 ************************************ 00:07:23.072 END TEST accel_decmop_full 00:07:23.072 ************************************ 00:07:23.332 14:27:05 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.332 14:27:05 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:23.332 14:27:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.332 14:27:05 -- common/autotest_common.sh@10 -- # set +x 00:07:23.332 ************************************ 00:07:23.332 START TEST accel_decomp_mcore 00:07:23.332 ************************************ 00:07:23.332 14:27:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.332 14:27:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.332 14:27:05 -- accel/accel.sh@17 -- # local accel_module 00:07:23.332 14:27:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.332 14:27:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.332 14:27:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.332 14:27:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.332 14:27:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.332 14:27:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.332 14:27:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.332 14:27:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.332 14:27:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.332 14:27:05 -- accel/accel.sh@42 -- # jq -r . 00:07:23.332 [2024-10-01 14:27:05.622395] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
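accel_decomp_mcore adds '-m 0xf', and the trace below shows four reactors starting on cores 0 through 3 accordingly. The same sketch as before, scaled out:

    # verified decompress workload spread across four cores (mask 0xf = cores 0-3)
    sudo $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress -l $SPDK_DIR/test/accel/bib -y -m 0xf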
00:07:23.332 [2024-10-01 14:27:05.622463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697937 ] 00:07:23.332 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.332 [2024-10-01 14:27:05.694799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.332 [2024-10-01 14:27:05.780041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.332 [2024-10-01 14:27:05.780128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.332 [2024-10-01 14:27:05.780207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.332 [2024-10-01 14:27:05.780209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.713 14:27:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:24.713 00:07:24.713 SPDK Configuration: 00:07:24.713 Core mask: 0xf 00:07:24.713 00:07:24.713 Accel Perf Configuration: 00:07:24.713 Workload Type: decompress 00:07:24.713 Transfer size: 4096 bytes 00:07:24.713 Vector count 1 00:07:24.713 Module: software 00:07:24.713 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:24.713 Queue depth: 32 00:07:24.713 Allocate depth: 32 00:07:24.713 # threads/core: 1 00:07:24.713 Run time: 1 seconds 00:07:24.713 Verify: Yes 00:07:24.713 00:07:24.713 Running for 1 seconds... 00:07:24.713 00:07:24.713 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.713 ------------------------------------------------------------------------------------ 00:07:24.713 0,0 73888/s 136 MiB/s 0 0 00:07:24.713 3,0 74080/s 136 MiB/s 0 0 00:07:24.713 2,0 74240/s 136 MiB/s 0 0 00:07:24.713 1,0 74176/s 136 MiB/s 0 0 00:07:24.713 ==================================================================================== 00:07:24.713 Total 296384/s 1157 MiB/s 0 0' 00:07:24.713 14:27:06 -- accel/accel.sh@20 -- # IFS=: 00:07:24.713 14:27:06 -- accel/accel.sh@20 -- # read -r var val 00:07:24.713 14:27:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.713 14:27:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.713 14:27:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.713 14:27:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.713 14:27:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.713 14:27:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.713 14:27:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.713 14:27:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.713 14:27:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.713 14:27:06 -- accel/accel.sh@42 -- # jq -r . 00:07:24.713 [2024-10-01 14:27:06.994773] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
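The per-core rows of the mcore table above sum to the Total row, and the aggregate bandwidth again follows from transfers times the 4096-byte transfer size:

    echo $(( 73888 + 74080 + 74240 + 74176 ))   # prints 296384, the reported Total
    echo $(( 296384 * 4096 / 1024 / 1024 ))     # prints 1157, the aggregate MiB/s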
00:07:24.713 [2024-10-01 14:27:06.994879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698186 ] 00:07:24.713 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.713 [2024-10-01 14:27:07.067900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.713 [2024-10-01 14:27:07.147693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.713 [2024-10-01 14:27:07.147785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.713 [2024-10-01 14:27:07.147808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.713 [2024-10-01 14:27:07.147810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.713 14:27:07 -- accel/accel.sh@21 -- # val= 00:07:24.713 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.713 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val= 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val= 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val=0xf 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val= 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val= 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val=decompress 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val= 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val=software 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val=32 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val=32 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val=1 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val=Yes 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val= 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:24.714 14:27:07 -- accel/accel.sh@21 -- # val= 00:07:24.714 14:27:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # IFS=: 00:07:24.714 14:27:07 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@21 -- # val= 00:07:26.094 14:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@21 -- # val= 00:07:26.094 14:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@21 -- # val= 00:07:26.094 14:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@21 -- # val= 00:07:26.094 14:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@21 -- # val= 00:07:26.094 14:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@21 -- # val= 00:07:26.094 14:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@21 -- # val= 00:07:26.094 14:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@21 -- # val= 00:07:26.094 14:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.094 
14:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@21 -- # val= 00:07:26.094 14:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:26.094 14:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:26.094 14:27:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.094 14:27:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:26.094 14:27:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.094 00:07:26.094 real 0m2.731s 00:07:26.094 user 0m9.154s 00:07:26.094 sys 0m0.277s 00:07:26.094 14:27:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.094 14:27:08 -- common/autotest_common.sh@10 -- # set +x 00:07:26.094 ************************************ 00:07:26.094 END TEST accel_decomp_mcore 00:07:26.094 ************************************ 00:07:26.094 14:27:08 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.094 14:27:08 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:26.094 14:27:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.094 14:27:08 -- common/autotest_common.sh@10 -- # set +x 00:07:26.094 ************************************ 00:07:26.094 START TEST accel_decomp_full_mcore 00:07:26.094 ************************************ 00:07:26.094 14:27:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.094 14:27:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.094 14:27:08 -- accel/accel.sh@17 -- # local accel_module 00:07:26.094 14:27:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.094 14:27:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.094 14:27:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.094 14:27:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.094 14:27:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.094 14:27:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.094 14:27:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.094 14:27:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.094 14:27:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.094 14:27:08 -- accel/accel.sh@42 -- # jq -r . 00:07:26.095 [2024-10-01 14:27:08.412074] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
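accel_decomp_full_mcore combines the two variations above, the full-size '-o 0' transfers and the 0xf core mask. Note in passing that the mcore timing just reported shows user time (9.154s) well above real time (2.731s), which is what four busy cores should produce. The combined sketch, same assumptions:

    # full-buffer decompress across four cores
    sudo $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress -l $SPDK_DIR/test/accel/bib -y -o 0 -m 0xf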
00:07:26.095 [2024-10-01 14:27:08.412170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698389 ] 00:07:26.095 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.095 [2024-10-01 14:27:08.489805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.095 [2024-10-01 14:27:08.579348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.095 [2024-10-01 14:27:08.579435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.095 [2024-10-01 14:27:08.579496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.095 [2024-10-01 14:27:08.579497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.475 14:27:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:27.475 00:07:27.475 SPDK Configuration: 00:07:27.475 Core mask: 0xf 00:07:27.475 00:07:27.475 Accel Perf Configuration: 00:07:27.475 Workload Type: decompress 00:07:27.475 Transfer size: 111250 bytes 00:07:27.475 Vector count 1 00:07:27.475 Module: software 00:07:27.475 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:27.476 Queue depth: 32 00:07:27.476 Allocate depth: 32 00:07:27.476 # threads/core: 1 00:07:27.476 Run time: 1 seconds 00:07:27.476 Verify: Yes 00:07:27.476 00:07:27.476 Running for 1 seconds... 00:07:27.476 00:07:27.476 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.476 ------------------------------------------------------------------------------------ 00:07:27.476 0,0 5472/s 226 MiB/s 0 0 00:07:27.476 3,0 5664/s 233 MiB/s 0 0 00:07:27.476 2,0 5664/s 233 MiB/s 0 0 00:07:27.476 1,0 5664/s 233 MiB/s 0 0 00:07:27.476 ==================================================================================== 00:07:27.476 Total 22464/s 2383 MiB/s 0 0' 00:07:27.476 14:27:09 -- accel/accel.sh@20 -- # IFS=: 00:07:27.476 14:27:09 -- accel/accel.sh@20 -- # read -r var val 00:07:27.476 14:27:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.476 14:27:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.476 14:27:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.476 14:27:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.476 14:27:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.476 14:27:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.476 14:27:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.476 14:27:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.476 14:27:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.476 14:27:09 -- accel/accel.sh@42 -- # jq -r . 00:07:27.476 [2024-10-01 14:27:09.813928] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
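One more consistency check, this time with the 111250-byte transfers of the full-buffer variant:

    echo $(( 5472 + 5664 + 5664 + 5664 ))      # prints 22464, the reported Total
    echo $(( 22464 * 111250 / 1024 / 1024 ))   # prints 2383, the aggregate MiB/s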
00:07:27.476 [2024-10-01 14:27:09.814015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698572 ] 00:07:27.476 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.476 [2024-10-01 14:27:09.890084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.476 [2024-10-01 14:27:09.973599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.476 [2024-10-01 14:27:09.973687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.476 [2024-10-01 14:27:09.973765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.476 [2024-10-01 14:27:09.973767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.736 14:27:10 -- accel/accel.sh@21 -- # val= 00:07:27.736 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.736 14:27:10 -- accel/accel.sh@21 -- # val= 00:07:27.736 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.736 14:27:10 -- accel/accel.sh@21 -- # val= 00:07:27.736 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.736 14:27:10 -- accel/accel.sh@21 -- # val=0xf 00:07:27.736 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.736 14:27:10 -- accel/accel.sh@21 -- # val= 00:07:27.736 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.736 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.736 14:27:10 -- accel/accel.sh@21 -- # val= 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val=decompress 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val= 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val=software 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val=32 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val=32 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val=1 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val=Yes 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val= 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:27.737 14:27:10 -- accel/accel.sh@21 -- # val= 00:07:27.737 14:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # IFS=: 00:07:27.737 14:27:10 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@21 -- # val= 00:07:28.675 14:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@21 -- # val= 00:07:28.675 14:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@21 -- # val= 00:07:28.675 14:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@21 -- # val= 00:07:28.675 14:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@21 -- # val= 00:07:28.675 14:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@21 -- # val= 00:07:28.675 14:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@21 -- # val= 00:07:28.675 14:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@21 -- # val= 00:07:28.675 14:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.675 
14:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@21 -- # val= 00:07:28.675 14:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:28.675 14:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:28.675 14:27:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.675 14:27:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.675 14:27:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.675 00:07:28.675 real 0m2.809s 00:07:28.675 user 0m9.310s 00:07:28.675 sys 0m0.309s 00:07:28.675 14:27:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.934 14:27:11 -- common/autotest_common.sh@10 -- # set +x 00:07:28.934 ************************************ 00:07:28.934 END TEST accel_decomp_full_mcore 00:07:28.934 ************************************ 00:07:28.934 14:27:11 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.934 14:27:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:28.934 14:27:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.934 14:27:11 -- common/autotest_common.sh@10 -- # set +x 00:07:28.934 ************************************ 00:07:28.934 START TEST accel_decomp_mthread 00:07:28.934 ************************************ 00:07:28.934 14:27:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.934 14:27:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.934 14:27:11 -- accel/accel.sh@17 -- # local accel_module 00:07:28.934 14:27:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.934 14:27:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.934 14:27:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.934 14:27:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.934 14:27:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.934 14:27:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.934 14:27:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.934 14:27:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.934 14:27:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.934 14:27:11 -- accel/accel.sh@42 -- # jq -r . 00:07:28.934 [2024-10-01 14:27:11.269971] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:28.934 [2024-10-01 14:27:11.270060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698776 ] 00:07:28.934 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.934 [2024-10-01 14:27:11.344530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.934 [2024-10-01 14:27:11.426048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.306 14:27:12 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:30.306 00:07:30.306 SPDK Configuration: 00:07:30.306 Core mask: 0x1 00:07:30.306 00:07:30.306 Accel Perf Configuration: 00:07:30.306 Workload Type: decompress 00:07:30.306 Transfer size: 4096 bytes 00:07:30.306 Vector count 1 00:07:30.306 Module: software 00:07:30.306 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:30.306 Queue depth: 32 00:07:30.306 Allocate depth: 32 00:07:30.306 # threads/core: 2 00:07:30.306 Run time: 1 seconds 00:07:30.306 Verify: Yes 00:07:30.306 00:07:30.306 Running for 1 seconds... 00:07:30.306 00:07:30.306 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.306 ------------------------------------------------------------------------------------ 00:07:30.306 0,1 46624/s 85 MiB/s 0 0 00:07:30.306 0,0 46464/s 85 MiB/s 0 0 00:07:30.306 ==================================================================================== 00:07:30.306 Total 93088/s 363 MiB/s 0 0' 00:07:30.306 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.306 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.306 14:27:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.306 14:27:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.306 14:27:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.306 14:27:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.306 14:27:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.306 14:27:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.306 14:27:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.306 14:27:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.306 14:27:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.306 14:27:12 -- accel/accel.sh@42 -- # jq -r . 00:07:30.306 [2024-10-01 14:27:12.646939] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
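accel_decomp_mthread drops back to a single core but passes -T 2, so two worker threads share core 0; the table above accordingly shows one 0,<thread> row per thread ("# threads/core: 2") and a Total line that is their sum (46624 + 46464 = 93088 ops/s). A sketch of the invocation plus an illustrative check of the table arithmetic (results.txt is hypothetical and assumes the table rows were saved without the log timestamps):

    "$SPDK_ROOT/build/examples/accel_perf" \
        -t 1 -w decompress -l "$SPDK_ROOT/test/accel/bib" -y -T 2

    # Per-thread rows ("core,thread  N/s ...") should add up to the Total line.
    awk '/^[0-9]+,[0-9]+/ { sub("/s", "", $2); sum += $2 }
         /^Total/        { sub("/s", "", $2); print (sum == $2 ? "OK" : "mismatch") }' results.txt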
00:07:30.306 [2024-10-01 14:27:12.647029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698958 ] 00:07:30.306 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.306 [2024-10-01 14:27:12.723833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.306 [2024-10-01 14:27:12.805461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.562 14:27:12 -- accel/accel.sh@21 -- # val= 00:07:30.562 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.562 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.562 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.562 14:27:12 -- accel/accel.sh@21 -- # val= 00:07:30.562 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.562 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.562 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.562 14:27:12 -- accel/accel.sh@21 -- # val= 00:07:30.562 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.562 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.562 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.562 14:27:12 -- accel/accel.sh@21 -- # val=0x1 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val= 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val= 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val=decompress 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val= 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val=software 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val=32 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 
-- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val=32 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val=2 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val=Yes 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val= 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:30.563 14:27:12 -- accel/accel.sh@21 -- # val= 00:07:30.563 14:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:30.563 14:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:31.493 14:27:14 -- accel/accel.sh@21 -- # val= 00:07:31.493 14:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.493 14:27:14 -- accel/accel.sh@21 -- # val= 00:07:31.493 14:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.493 14:27:14 -- accel/accel.sh@21 -- # val= 00:07:31.493 14:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.493 14:27:14 -- accel/accel.sh@21 -- # val= 00:07:31.493 14:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.493 14:27:14 -- accel/accel.sh@21 -- # val= 00:07:31.493 14:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.493 14:27:14 -- accel/accel.sh@21 -- # val= 00:07:31.493 14:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.493 14:27:14 -- accel/accel.sh@21 -- # val= 00:07:31.493 14:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:31.493 14:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:31.493 14:27:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.493 14:27:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.493 14:27:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.493 00:07:31.493 real 0m2.763s 00:07:31.493 user 0m2.483s 00:07:31.493 sys 0m0.287s 00:07:31.493 14:27:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.493 14:27:14 -- common/autotest_common.sh@10 -- # set +x 
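Every case in this section runs under run_test from autotest_common.sh; the starred START/END banners and the real/user/sys lines interleaved in the log are produced by that wrapper. A simplified sketch reconstructed from the banners, not the helper's actual body (the real function also manages xtrace state):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"         # bash's time keyword emits the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }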
00:07:31.493 ************************************ 00:07:31.493 END TEST accel_decomp_mthread 00:07:31.493 ************************************ 00:07:31.751 14:27:14 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.751 14:27:14 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:31.751 14:27:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.751 14:27:14 -- common/autotest_common.sh@10 -- # set +x 00:07:31.751 ************************************ 00:07:31.751 START TEST accel_deomp_full_mthread 00:07:31.751 ************************************ 00:07:31.751 14:27:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.751 14:27:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.751 14:27:14 -- accel/accel.sh@17 -- # local accel_module 00:07:31.751 14:27:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.751 14:27:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.751 14:27:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.751 14:27:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.751 14:27:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.751 14:27:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.751 14:27:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.751 14:27:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.751 14:27:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.751 14:27:14 -- accel/accel.sh@42 -- # jq -r . 00:07:31.751 [2024-10-01 14:27:14.082841] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:31.751 [2024-10-01 14:27:14.082931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699157 ] 00:07:31.751 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.751 [2024-10-01 14:27:14.158407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.751 [2024-10-01 14:27:14.239457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.122 14:27:15 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:33.122 00:07:33.122 SPDK Configuration: 00:07:33.122 Core mask: 0x1 00:07:33.122 00:07:33.122 Accel Perf Configuration: 00:07:33.122 Workload Type: decompress 00:07:33.122 Transfer size: 111250 bytes 00:07:33.122 Vector count 1 00:07:33.122 Module: software 00:07:33.122 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:33.122 Queue depth: 32 00:07:33.122 Allocate depth: 32 00:07:33.122 # threads/core: 2 00:07:33.122 Run time: 1 seconds 00:07:33.122 Verify: Yes 00:07:33.122 00:07:33.122 Running for 1 seconds... 
00:07:33.122 00:07:33.122 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.122 ------------------------------------------------------------------------------------ 00:07:33.122 0,1 2976/s 122 MiB/s 0 0 00:07:33.122 0,0 2944/s 121 MiB/s 0 0 00:07:33.122 ==================================================================================== 00:07:33.122 Total 5920/s 628 MiB/s 0 0' 00:07:33.122 14:27:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.122 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.122 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.122 14:27:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.122 14:27:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.122 14:27:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.122 14:27:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.122 14:27:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.122 14:27:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.122 14:27:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.122 14:27:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.122 14:27:15 -- accel/accel.sh@42 -- # jq -r . 00:07:33.122 [2024-10-01 14:27:15.469359] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:33.122 [2024-10-01 14:27:15.469412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699370 ] 00:07:33.122 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.122 [2024-10-01 14:27:15.535406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.122 [2024-10-01 14:27:15.623427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val= 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val= 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val= 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val=0x1 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val= 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val= 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val=decompress 
00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val= 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val=software 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val=32 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val=32 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val=2 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val=Yes 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val= 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:33.380 14:27:15 -- accel/accel.sh@21 -- # val= 00:07:33.380 14:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:33.380 14:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:34.752 14:27:16 -- accel/accel.sh@21 -- # val= 00:07:34.752 14:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # IFS=: 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # read -r var val 00:07:34.752 14:27:16 -- accel/accel.sh@21 -- # val= 00:07:34.752 14:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # IFS=: 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # read -r var val 00:07:34.752 14:27:16 -- accel/accel.sh@21 -- # val= 00:07:34.752 14:27:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # IFS=: 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # read -r var val 00:07:34.752 14:27:16 -- accel/accel.sh@21 -- # val= 00:07:34.752 14:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # IFS=: 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # read -r var val 00:07:34.752 14:27:16 -- accel/accel.sh@21 -- # val= 00:07:34.752 14:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # IFS=: 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # read -r var val 00:07:34.752 14:27:16 -- accel/accel.sh@21 -- # val= 00:07:34.752 14:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # IFS=: 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # read -r var val 00:07:34.752 14:27:16 -- accel/accel.sh@21 -- # val= 00:07:34.752 14:27:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # IFS=: 00:07:34.752 14:27:16 -- accel/accel.sh@20 -- # read -r var val 00:07:34.752 14:27:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.752 14:27:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.752 14:27:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.752 00:07:34.752 real 0m2.789s 00:07:34.752 user 0m2.517s 00:07:34.752 sys 0m0.275s 00:07:34.752 14:27:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.752 14:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:34.752 ************************************ 00:07:34.752 END TEST accel_deomp_full_mthread 00:07:34.752 ************************************ 00:07:34.752 14:27:16 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:34.752 14:27:16 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.752 14:27:16 -- accel/accel.sh@129 -- # build_accel_config 00:07:34.752 14:27:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:34.752 14:27:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.752 14:27:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.752 14:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:34.752 14:27:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.752 14:27:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.752 14:27:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.752 14:27:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.752 14:27:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.752 14:27:16 -- accel/accel.sh@42 -- # jq -r . 00:07:34.752 ************************************ 00:07:34.752 START TEST accel_dif_functional_tests 00:07:34.752 ************************************ 00:07:34.752 14:27:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.752 [2024-10-01 14:27:16.921525] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
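accel_dif_functional_tests switches from accel_perf to a dedicated CUnit binary that exercises the accel DIF (Data Integrity Field) paths: verify and generate-copy, each deliberately corrupting one of the three protection fields (Guard CRC, Application Tag, Reference Tag). The dif.c *ERROR* compare lines in the output below are therefore expected output from the negative cases, and each such test still reports passed. Invocation, as traced, using the same JSON-config-over-fd pattern as accel_perf (placeholder JSON as before):

    "$SPDK_ROOT/test/accel/dif/dif" -c <(echo '{"subsystems": []}')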
00:07:34.752 [2024-10-01 14:27:16.921614] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699609 ] 00:07:34.752 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.752 [2024-10-01 14:27:16.996385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.752 [2024-10-01 14:27:17.086691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.752 [2024-10-01 14:27:17.086779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.752 [2024-10-01 14:27:17.086782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.752 00:07:34.752 00:07:34.752 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.752 http://cunit.sourceforge.net/ 00:07:34.752 00:07:34.752 00:07:34.752 Suite: accel_dif 00:07:34.752 Test: verify: DIF generated, GUARD check ...passed 00:07:34.752 Test: verify: DIF generated, APPTAG check ...passed 00:07:34.752 Test: verify: DIF generated, REFTAG check ...passed 00:07:34.752 Test: verify: DIF not generated, GUARD check ...[2024-10-01 14:27:17.165732] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.752 [2024-10-01 14:27:17.165783] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.752 passed 00:07:34.752 Test: verify: DIF not generated, APPTAG check ...[2024-10-01 14:27:17.165834] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.752 [2024-10-01 14:27:17.165854] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.752 passed 00:07:34.752 Test: verify: DIF not generated, REFTAG check ...[2024-10-01 14:27:17.165875] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.752 [2024-10-01 14:27:17.165893] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.752 passed 00:07:34.752 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:34.752 Test: verify: APPTAG incorrect, APPTAG check ...[2024-10-01 14:27:17.165938] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:34.752 passed 00:07:34.752 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:34.752 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:34.752 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:34.752 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-10-01 14:27:17.166042] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:34.752 passed 00:07:34.752 Test: generate copy: DIF generated, GUARD check ...passed 00:07:34.752 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:34.752 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:34.752 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:34.752 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:34.752 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:34.752 Test: generate copy: iovecs-len validate ...[2024-10-01 14:27:17.166224] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:34.752 passed 00:07:34.752 Test: generate copy: buffer alignment validate ...passed 00:07:34.752 00:07:34.752 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.752 suites 1 1 n/a 0 0 00:07:34.752 tests 20 20 20 0 0 00:07:34.752 asserts 204 204 204 0 n/a 00:07:34.752 00:07:34.752 Elapsed time = 0.002 seconds 00:07:35.010 00:07:35.010 real 0m0.452s 00:07:35.010 user 0m0.684s 00:07:35.010 sys 0m0.171s 00:07:35.010 14:27:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.010 14:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:35.010 ************************************ 00:07:35.010 END TEST accel_dif_functional_tests 00:07:35.010 ************************************ 00:07:35.010 00:07:35.010 real 0m58.934s 00:07:35.010 user 1m6.116s 00:07:35.010 sys 0m7.690s 00:07:35.010 14:27:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.010 14:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:35.010 ************************************ 00:07:35.010 END TEST accel 00:07:35.010 ************************************ 00:07:35.010 14:27:17 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:35.010 14:27:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:35.010 14:27:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.010 14:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:35.010 ************************************ 00:07:35.010 START TEST accel_rpc 00:07:35.010 ************************************ 00:07:35.010 14:27:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:35.010 * Looking for test storage... 00:07:35.010 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:35.010 14:27:17 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.010 14:27:17 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=699764 00:07:35.010 14:27:17 -- accel/accel_rpc.sh@15 -- # waitforlisten 699764 00:07:35.010 14:27:17 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:35.010 14:27:17 -- common/autotest_common.sh@819 -- # '[' -z 699764 ']' 00:07:35.010 14:27:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.010 14:27:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:35.010 14:27:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.011 14:27:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:35.011 14:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:35.268 [2024-10-01 14:27:17.543406] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
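accel_rpc.sh moves from standalone example binaries to a full spdk_tgt application started with --wait-for-rpc, which holds off subsystem initialization until RPCs arrive. waitforlisten is the autotest helper that blocks until the RPC socket answers; a simple equivalent of the start-up idiom (the polling loop is a sketch, not the helper's implementation):

    "$SPDK_ROOT/build/bin/spdk_tgt" --wait-for-rpc &
    spdk_tgt_pid=$!
    # poll /var/tmp/spdk.sock until an always-available RPC succeeds
    until "$SPDK_ROOT/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.1
    done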
00:07:35.268 [2024-10-01 14:27:17.543482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699764 ] 00:07:35.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.268 [2024-10-01 14:27:17.626043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.268 [2024-10-01 14:27:17.719466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.268 [2024-10-01 14:27:17.719585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.201 14:27:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.201 14:27:18 -- common/autotest_common.sh@852 -- # return 0 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:36.201 14:27:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.201 14:27:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.201 14:27:18 -- common/autotest_common.sh@10 -- # set +x 00:07:36.201 ************************************ 00:07:36.201 START TEST accel_assign_opcode 00:07:36.201 ************************************ 00:07:36.201 14:27:18 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:36.201 14:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.201 14:27:18 -- common/autotest_common.sh@10 -- # set +x 00:07:36.201 [2024-10-01 14:27:18.401643] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:36.201 14:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:36.201 14:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.201 14:27:18 -- common/autotest_common.sh@10 -- # set +x 00:07:36.201 [2024-10-01 14:27:18.409672] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:36.201 14:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:36.201 14:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.201 14:27:18 -- common/autotest_common.sh@10 -- # set +x 00:07:36.201 14:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@42 -- # grep software 00:07:36.201 14:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.201 14:27:18 -- common/autotest_common.sh@10 -- # set +x 00:07:36.201 14:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.201 software 00:07:36.201 00:07:36.201 real 0m0.231s 00:07:36.201 user 0m0.024s 00:07:36.201 sys 0m0.011s 00:07:36.201 14:27:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.201 14:27:18 -- common/autotest_common.sh@10 -- # set +x 
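The assign-opcode case above maps the copy opcode to a module by name before the framework comes up: the assignment to the nonexistent module "incorrect" is accepted at this stage (only a NOTICE is logged) and is simply overridden by the later assignment to software; after framework_start_init, accel_get_opc_assignments reports the effective mapping. The RPC sequence, as traced:

    rpc="$SPDK_ROOT/scripts/rpc.py"
    $rpc accel_assign_opc -o copy -m incorrect   # pre-init, logged but not fatal
    $rpc accel_assign_opc -o copy -m software    # last assignment wins
    $rpc framework_start_init
    $rpc accel_get_opc_assignments | jq -r .copy # prints: software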
00:07:36.201 ************************************ 00:07:36.201 END TEST accel_assign_opcode 00:07:36.201 ************************************ 00:07:36.201 14:27:18 -- accel/accel_rpc.sh@55 -- # killprocess 699764 00:07:36.201 14:27:18 -- common/autotest_common.sh@926 -- # '[' -z 699764 ']' 00:07:36.201 14:27:18 -- common/autotest_common.sh@930 -- # kill -0 699764 00:07:36.201 14:27:18 -- common/autotest_common.sh@931 -- # uname 00:07:36.201 14:27:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:36.201 14:27:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 699764 00:07:36.459 14:27:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:36.459 14:27:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:36.459 14:27:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 699764' 00:07:36.459 killing process with pid 699764 00:07:36.459 14:27:18 -- common/autotest_common.sh@945 -- # kill 699764 00:07:36.459 14:27:18 -- common/autotest_common.sh@950 -- # wait 699764 00:07:36.716 00:07:36.716 real 0m1.629s 00:07:36.716 user 0m1.649s 00:07:36.716 sys 0m0.485s 00:07:36.716 14:27:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.716 14:27:19 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 ************************************ 00:07:36.716 END TEST accel_rpc 00:07:36.716 ************************************ 00:07:36.716 14:27:19 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.716 14:27:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.716 14:27:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.716 14:27:19 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 ************************************ 00:07:36.716 START TEST app_cmdline 00:07:36.716 ************************************ 00:07:36.716 14:27:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.716 * Looking for test storage... 00:07:36.716 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:36.716 14:27:19 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:36.716 14:27:19 -- app/cmdline.sh@17 -- # spdk_tgt_pid=700014 00:07:36.716 14:27:19 -- app/cmdline.sh@18 -- # waitforlisten 700014 00:07:36.716 14:27:19 -- common/autotest_common.sh@819 -- # '[' -z 700014 ']' 00:07:36.716 14:27:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.716 14:27:19 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:36.716 14:27:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.716 14:27:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.716 14:27:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.716 14:27:19 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 [2024-10-01 14:27:19.225760] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
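app_cmdline starts the target with an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods), so only those two methods are callable; anything else must fail with JSON-RPC error -32601 "Method not found", which is exactly what the env_dpdk_get_mem_stats probe below verifies through the NOT wrapper. Sketch of the two probes:

    rpc="$SPDK_ROOT/scripts/rpc.py"
    $rpc spdk_get_version          # allowed: returns the version JSON shown below
    $rpc env_dpdk_get_mem_stats    # filtered: expect code -32601, Method not found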
00:07:36.716 [2024-10-01 14:27:19.225843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid700014 ] 00:07:36.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.973 [2024-10-01 14:27:19.297487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.973 [2024-10-01 14:27:19.383619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:36.973 [2024-10-01 14:27:19.383738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.908 14:27:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:37.908 14:27:20 -- common/autotest_common.sh@852 -- # return 0 00:07:37.908 14:27:20 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:37.908 { 00:07:37.908 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:07:37.908 "fields": { 00:07:37.908 "major": 24, 00:07:37.908 "minor": 1, 00:07:37.908 "patch": 1, 00:07:37.908 "suffix": "-pre", 00:07:37.908 "commit": "726a04d70" 00:07:37.908 } 00:07:37.908 } 00:07:37.909 14:27:20 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:37.909 14:27:20 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:37.909 14:27:20 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:37.909 14:27:20 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:37.909 14:27:20 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:37.909 14:27:20 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:37.909 14:27:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.909 14:27:20 -- app/cmdline.sh@26 -- # sort 00:07:37.909 14:27:20 -- common/autotest_common.sh@10 -- # set +x 00:07:37.909 14:27:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.909 14:27:20 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:37.909 14:27:20 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:37.909 14:27:20 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.909 14:27:20 -- common/autotest_common.sh@640 -- # local es=0 00:07:37.909 14:27:20 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.909 14:27:20 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:37.909 14:27:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:37.909 14:27:20 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:37.909 14:27:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:37.909 14:27:20 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:37.909 14:27:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:37.909 14:27:20 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:37.909 14:27:20 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.909 14:27:20 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.167 request: 00:07:38.168 { 00:07:38.168 "method": "env_dpdk_get_mem_stats", 00:07:38.168 "req_id": 1 00:07:38.168 } 00:07:38.168 Got JSON-RPC error response 00:07:38.168 response: 00:07:38.168 { 00:07:38.168 "code": -32601, 00:07:38.168 "message": "Method not found" 00:07:38.168 } 00:07:38.168 14:27:20 -- common/autotest_common.sh@643 -- # es=1 00:07:38.168 14:27:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:38.168 14:27:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:38.168 14:27:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:38.168 14:27:20 -- app/cmdline.sh@1 -- # killprocess 700014 00:07:38.168 14:27:20 -- common/autotest_common.sh@926 -- # '[' -z 700014 ']' 00:07:38.168 14:27:20 -- common/autotest_common.sh@930 -- # kill -0 700014 00:07:38.168 14:27:20 -- common/autotest_common.sh@931 -- # uname 00:07:38.168 14:27:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:38.168 14:27:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 700014 00:07:38.168 14:27:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:38.168 14:27:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:38.168 14:27:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 700014' 00:07:38.168 killing process with pid 700014 00:07:38.168 14:27:20 -- common/autotest_common.sh@945 -- # kill 700014 00:07:38.168 14:27:20 -- common/autotest_common.sh@950 -- # wait 700014 00:07:38.426 00:07:38.426 real 0m1.751s 00:07:38.426 user 0m2.073s 00:07:38.426 sys 0m0.480s 00:07:38.426 14:27:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.426 14:27:20 -- common/autotest_common.sh@10 -- # set +x 00:07:38.426 ************************************ 00:07:38.426 END TEST app_cmdline 00:07:38.426 ************************************ 00:07:38.426 14:27:20 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:38.426 14:27:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.426 14:27:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.426 14:27:20 -- common/autotest_common.sh@10 -- # set +x 00:07:38.426 ************************************ 00:07:38.426 START TEST version 00:07:38.426 ************************************ 00:07:38.426 14:27:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:38.685 * Looking for test storage... 
00:07:38.685 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:38.685 14:27:21 -- app/version.sh@17 -- # get_header_version major 00:07:38.685 14:27:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:38.685 14:27:21 -- app/version.sh@14 -- # cut -f2 00:07:38.685 14:27:21 -- app/version.sh@14 -- # tr -d '"' 00:07:38.685 14:27:21 -- app/version.sh@17 -- # major=24 00:07:38.685 14:27:21 -- app/version.sh@18 -- # get_header_version minor 00:07:38.685 14:27:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:38.685 14:27:21 -- app/version.sh@14 -- # cut -f2 00:07:38.685 14:27:21 -- app/version.sh@14 -- # tr -d '"' 00:07:38.685 14:27:21 -- app/version.sh@18 -- # minor=1 00:07:38.685 14:27:21 -- app/version.sh@19 -- # get_header_version patch 00:07:38.685 14:27:21 -- app/version.sh@14 -- # cut -f2 00:07:38.685 14:27:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:38.685 14:27:21 -- app/version.sh@14 -- # tr -d '"' 00:07:38.685 14:27:21 -- app/version.sh@19 -- # patch=1 00:07:38.685 14:27:21 -- app/version.sh@20 -- # get_header_version suffix 00:07:38.685 14:27:21 -- app/version.sh@14 -- # cut -f2 00:07:38.685 14:27:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:38.685 14:27:21 -- app/version.sh@14 -- # tr -d '"' 00:07:38.685 14:27:21 -- app/version.sh@20 -- # suffix=-pre 00:07:38.685 14:27:21 -- app/version.sh@22 -- # version=24.1 00:07:38.685 14:27:21 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:38.685 14:27:21 -- app/version.sh@25 -- # version=24.1.1 00:07:38.685 14:27:21 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:38.685 14:27:21 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:38.685 14:27:21 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:38.685 14:27:21 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:38.685 14:27:21 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:38.685 00:07:38.685 real 0m0.178s 00:07:38.685 user 0m0.091s 00:07:38.685 sys 0m0.130s 00:07:38.685 14:27:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.685 14:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:38.685 ************************************ 00:07:38.685 END TEST version 00:07:38.685 ************************************ 00:07:38.685 14:27:21 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@204 -- # uname -s 00:07:38.685 14:27:21 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:38.685 14:27:21 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:38.685 14:27:21 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:38.685 14:27:21 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@268 -- # timing_exit lib 
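The version.sh run above recovers each version component from include/spdk/version.h with a grep/cut/tr pipeline, appends the patch level when it is non-zero, renders the -pre suffix as rc0 (the exact suffix rule is inferred from the 24.1.1rc0 == 24.1.1rc0 comparison in the trace), and cross-checks against the Python package. Condensed into a self-contained sketch:

    get_header_version() {   # component name: MAJOR, MINOR, PATCH or SUFFIX
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            "$SPDK_ROOT/include/spdk/version.h" | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"
    patch=$(get_header_version PATCH)
    (( patch != 0 )) && version="$version.$patch"
    [[ $(get_header_version SUFFIX) == -pre ]] && version="${version}rc0"
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]   # 24.1.1rc0 == 24.1.1rc0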
00:07:38.685 14:27:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:38.685 14:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:38.685 14:27:21 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:07:38.685 14:27:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:07:38.685 14:27:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:07:38.685 14:27:21 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:07:38.685 14:27:21 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:38.685 14:27:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.685 14:27:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.685 14:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:38.685 ************************************ 00:07:38.685 START TEST llvm_fuzz 00:07:38.685 ************************************ 00:07:38.685 14:27:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:38.946 * Looking for test storage... 
00:07:38.946 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:38.946 14:27:21 -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:38.946 14:27:21 -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:38.946 14:27:21 -- common/autotest_common.sh@538 -- # fuzzers=() 00:07:38.946 14:27:21 -- common/autotest_common.sh@538 -- # local fuzzers 00:07:38.946 14:27:21 -- common/autotest_common.sh@540 -- # [[ -n '' ]] 00:07:38.946 14:27:21 -- common/autotest_common.sh@543 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:38.946 14:27:21 -- common/autotest_common.sh@544 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:38.946 14:27:21 -- common/autotest_common.sh@547 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:38.946 14:27:21 -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:38.946 14:27:21 -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:38.946 14:27:21 -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:07:38.946 14:27:21 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:38.946 14:27:21 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:38.946 14:27:21 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:38.946 14:27:21 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:38.946 14:27:21 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:38.946 14:27:21 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:38.946 14:27:21 -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:38.946 14:27:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.946 14:27:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.946 14:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:38.946 ************************************ 00:07:38.946 START TEST nvmf_fuzz 00:07:38.946 ************************************ 00:07:38.946 14:27:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:38.946 * Looking for test storage... 
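Note: the get_fuzzer_targets trace above is a two-step glob trick: expand everything under test/fuzz/llvm/ into an array, then strip each entry down to its basename. Reconstructed from the common.sh lines in the trace (the SPDK_TEST_FUZZER_TARGET name behind the empty [[ -n '' ]] guard is an assumption, based on the variable exported later in this log):

    # rootdir: the SPDK repo root, as set earlier in the trace
    get_fuzzer_targets() {
        local fuzzers=()
        if [[ -n $SPDK_TEST_FUZZER_TARGET ]]; then
            fuzzers=("$SPDK_TEST_FUZZER_TARGET")     # explicit target, unused in this run
        else
            fuzzers=("$rootdir/test/fuzz/llvm/"*)    # full paths: common.sh llvm-gcov.sh nvmf vfio
            fuzzers=("${fuzzers[@]##*/}")            # basenames only
        fi
        echo "${fuzzers[@]}"
    }

llvm.sh then walks the list with case "$fuzzer": common.sh and llvm-gcov.sh fall through with no matching arm, which is why the trace shows two empty case iterations before the third one matches nvmf and dispatches its run.sh (vfio follows once nvmf completes).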
00:07:38.946 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:38.946 14:27:21 -- nvmf/run.sh@52 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:38.946 14:27:21 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:38.946 14:27:21 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:38.946 14:27:21 -- common/autotest_common.sh@34 -- # set -e 00:07:38.946 14:27:21 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:38.946 14:27:21 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:38.946 14:27:21 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:38.946 14:27:21 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:38.946 14:27:21 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:38.946 14:27:21 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:38.946 14:27:21 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:38.946 14:27:21 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:38.946 14:27:21 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:38.946 14:27:21 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:38.946 14:27:21 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:38.946 14:27:21 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:38.946 14:27:21 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:38.946 14:27:21 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:38.946 14:27:21 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:38.946 14:27:21 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:38.946 14:27:21 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:38.946 14:27:21 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:38.946 14:27:21 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:38.946 14:27:21 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:38.946 14:27:21 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:38.946 14:27:21 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:38.946 14:27:21 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:38.946 14:27:21 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:38.946 14:27:21 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:38.946 14:27:21 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:38.946 14:27:21 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:38.946 14:27:21 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:38.946 14:27:21 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:38.946 14:27:21 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:38.946 14:27:21 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:38.946 14:27:21 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:38.946 14:27:21 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:38.946 14:27:21 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:38.946 14:27:21 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:38.946 14:27:21 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:38.946 14:27:21 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:38.946 14:27:21 -- common/build_config.sh@34 -- # 
CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:38.946 14:27:21 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:38.946 14:27:21 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:38.946 14:27:21 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:38.946 14:27:21 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:38.946 14:27:21 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:38.946 14:27:21 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:38.946 14:27:21 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:38.946 14:27:21 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:38.946 14:27:21 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:38.946 14:27:21 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:38.946 14:27:21 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:38.946 14:27:21 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:38.946 14:27:21 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:38.946 14:27:21 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:38.946 14:27:21 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:38.946 14:27:21 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:38.946 14:27:21 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:38.946 14:27:21 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:38.946 14:27:21 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:38.946 14:27:21 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:38.946 14:27:21 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:38.946 14:27:21 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:38.946 14:27:21 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:38.946 14:27:21 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:38.946 14:27:21 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:38.946 14:27:21 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:38.946 14:27:21 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:38.946 14:27:21 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:38.946 14:27:21 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:38.946 14:27:21 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:07:38.946 14:27:21 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:38.946 14:27:21 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:38.946 14:27:21 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:38.946 14:27:21 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:38.946 14:27:21 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:38.946 14:27:21 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:38.946 14:27:21 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:38.946 14:27:21 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:38.946 14:27:21 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:38.946 14:27:21 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:38.946 14:27:21 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:38.947 14:27:21 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:38.947 14:27:21 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:38.947 14:27:21 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:38.947 14:27:21 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:38.947 14:27:21 -- 
common/autotest_common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:38.947 14:27:21 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:38.947 14:27:21 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:38.947 14:27:21 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:38.947 14:27:21 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:38.947 14:27:21 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:38.947 14:27:21 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:38.947 14:27:21 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:38.947 14:27:21 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:38.947 14:27:21 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:38.947 14:27:21 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:38.947 14:27:21 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:38.947 14:27:21 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:38.947 14:27:21 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:38.947 14:27:21 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:38.947 14:27:21 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:38.947 #define SPDK_CONFIG_H 00:07:38.947 #define SPDK_CONFIG_APPS 1 00:07:38.947 #define SPDK_CONFIG_ARCH native 00:07:38.947 #undef SPDK_CONFIG_ASAN 00:07:38.947 #undef SPDK_CONFIG_AVAHI 00:07:38.947 #undef SPDK_CONFIG_CET 00:07:38.947 #define SPDK_CONFIG_COVERAGE 1 00:07:38.947 #define SPDK_CONFIG_CROSS_PREFIX 00:07:38.947 #undef SPDK_CONFIG_CRYPTO 00:07:38.947 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:38.947 #undef SPDK_CONFIG_CUSTOMOCF 00:07:38.947 #undef SPDK_CONFIG_DAOS 00:07:38.947 #define SPDK_CONFIG_DAOS_DIR 00:07:38.947 #define SPDK_CONFIG_DEBUG 1 00:07:38.947 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:38.947 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:38.947 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:38.947 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:38.947 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:38.947 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:38.947 #define SPDK_CONFIG_EXAMPLES 1 00:07:38.947 #undef SPDK_CONFIG_FC 00:07:38.947 #define SPDK_CONFIG_FC_PATH 00:07:38.947 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:38.947 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:38.947 #undef SPDK_CONFIG_FUSE 00:07:38.947 #define SPDK_CONFIG_FUZZER 1 00:07:38.947 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:38.947 #undef SPDK_CONFIG_GOLANG 00:07:38.947 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:38.947 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:38.947 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:38.947 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:38.947 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:38.947 #define SPDK_CONFIG_IDXD 1 00:07:38.947 #define 
SPDK_CONFIG_IDXD_KERNEL 1 00:07:38.947 #undef SPDK_CONFIG_IPSEC_MB 00:07:38.947 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:38.947 #define SPDK_CONFIG_ISAL 1 00:07:38.947 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:38.947 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:38.947 #define SPDK_CONFIG_LIBDIR 00:07:38.947 #undef SPDK_CONFIG_LTO 00:07:38.947 #define SPDK_CONFIG_MAX_LCORES 00:07:38.947 #define SPDK_CONFIG_NVME_CUSE 1 00:07:38.947 #undef SPDK_CONFIG_OCF 00:07:38.947 #define SPDK_CONFIG_OCF_PATH 00:07:38.947 #define SPDK_CONFIG_OPENSSL_PATH 00:07:38.947 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:38.947 #undef SPDK_CONFIG_PGO_USE 00:07:38.947 #define SPDK_CONFIG_PREFIX /usr/local 00:07:38.947 #undef SPDK_CONFIG_RAID5F 00:07:38.947 #undef SPDK_CONFIG_RBD 00:07:38.947 #define SPDK_CONFIG_RDMA 1 00:07:38.947 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:38.947 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:38.947 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:38.947 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:38.947 #undef SPDK_CONFIG_SHARED 00:07:38.947 #undef SPDK_CONFIG_SMA 00:07:38.947 #define SPDK_CONFIG_TESTS 1 00:07:38.947 #undef SPDK_CONFIG_TSAN 00:07:38.947 #define SPDK_CONFIG_UBLK 1 00:07:38.947 #define SPDK_CONFIG_UBSAN 1 00:07:38.947 #undef SPDK_CONFIG_UNIT_TESTS 00:07:38.947 #undef SPDK_CONFIG_URING 00:07:38.947 #define SPDK_CONFIG_URING_PATH 00:07:38.947 #undef SPDK_CONFIG_URING_ZNS 00:07:38.947 #undef SPDK_CONFIG_USDT 00:07:38.947 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:38.947 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:38.947 #define SPDK_CONFIG_VFIO_USER 1 00:07:38.947 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:38.947 #define SPDK_CONFIG_VHOST 1 00:07:38.947 #define SPDK_CONFIG_VIRTIO 1 00:07:38.947 #undef SPDK_CONFIG_VTUNE 00:07:38.947 #define SPDK_CONFIG_VTUNE_DIR 00:07:38.947 #define SPDK_CONFIG_WERROR 1 00:07:38.947 #define SPDK_CONFIG_WPDK_DIR 00:07:38.947 #undef SPDK_CONFIG_XNVME 00:07:38.947 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:38.947 14:27:21 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:38.947 14:27:21 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:38.947 14:27:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.947 14:27:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.947 14:27:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.947 14:27:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.947 14:27:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.947 14:27:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.947 14:27:21 -- paths/export.sh@5 -- # export PATH 00:07:38.947 14:27:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.947 14:27:21 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:38.947 14:27:21 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:38.947 14:27:21 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:38.947 14:27:21 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:38.947 14:27:21 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:38.947 14:27:21 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:38.947 14:27:21 -- pm/common@16 -- # TEST_TAG=N/A 00:07:38.947 14:27:21 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:38.947 14:27:21 -- common/autotest_common.sh@52 -- # : 1 00:07:38.947 14:27:21 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:38.947 14:27:21 -- common/autotest_common.sh@56 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:38.947 14:27:21 -- common/autotest_common.sh@58 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:38.947 14:27:21 -- common/autotest_common.sh@60 -- # : 1 00:07:38.947 14:27:21 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:38.947 14:27:21 -- common/autotest_common.sh@62 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:38.947 14:27:21 -- common/autotest_common.sh@64 -- # : 00:07:38.947 14:27:21 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:38.947 14:27:21 -- common/autotest_common.sh@66 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:38.947 14:27:21 -- common/autotest_common.sh@68 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:38.947 14:27:21 -- common/autotest_common.sh@70 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:38.947 14:27:21 -- common/autotest_common.sh@72 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:38.947 14:27:21 -- common/autotest_common.sh@74 -- # : 0 00:07:38.947 
14:27:21 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:38.947 14:27:21 -- common/autotest_common.sh@76 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:38.947 14:27:21 -- common/autotest_common.sh@78 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:38.947 14:27:21 -- common/autotest_common.sh@80 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:38.947 14:27:21 -- common/autotest_common.sh@82 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:38.947 14:27:21 -- common/autotest_common.sh@84 -- # : 0 00:07:38.947 14:27:21 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:38.948 14:27:21 -- common/autotest_common.sh@86 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:38.948 14:27:21 -- common/autotest_common.sh@88 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:38.948 14:27:21 -- common/autotest_common.sh@90 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:38.948 14:27:21 -- common/autotest_common.sh@92 -- # : 1 00:07:38.948 14:27:21 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:38.948 14:27:21 -- common/autotest_common.sh@94 -- # : 1 00:07:38.948 14:27:21 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:38.948 14:27:21 -- common/autotest_common.sh@96 -- # : rdma 00:07:38.948 14:27:21 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:38.948 14:27:21 -- common/autotest_common.sh@98 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:38.948 14:27:21 -- common/autotest_common.sh@100 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:38.948 14:27:21 -- common/autotest_common.sh@102 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:38.948 14:27:21 -- common/autotest_common.sh@104 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:38.948 14:27:21 -- common/autotest_common.sh@106 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:38.948 14:27:21 -- common/autotest_common.sh@108 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:38.948 14:27:21 -- common/autotest_common.sh@110 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:38.948 14:27:21 -- common/autotest_common.sh@112 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:38.948 14:27:21 -- common/autotest_common.sh@114 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:38.948 14:27:21 -- common/autotest_common.sh@116 -- # : 1 00:07:38.948 14:27:21 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:38.948 14:27:21 -- common/autotest_common.sh@118 -- # : 00:07:38.948 14:27:21 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:38.948 14:27:21 -- common/autotest_common.sh@120 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:38.948 14:27:21 -- common/autotest_common.sh@122 -- # : 0 
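Note: the long run of ': <value>' / 'export SPDK_TEST_*' pairs above and below is one repeated idiom, bash parameter-expansion defaulting. Each flag keeps whatever autorun-spdk.conf already set and falls back to a built-in default otherwise; xtrace prints the line only after expansion, which is why ': 1' appears for the flags this job enabled. A sketch of the pattern with a few representative flags (the fallback values shown are illustrative, since the trace only reveals the effective results):

    # ": ${VAR:=default}" assigns default only when VAR is unset or empty.
    : "${RUN_NIGHTLY:=0}";              export RUN_NIGHTLY              # 1 via autorun-spdk.conf
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"; export SPDK_RUN_FUNCTIONAL_TEST # 1
    : "${SPDK_TEST_FUZZER:=0}";         export SPDK_TEST_FUZZER         # 1
    : "${SPDK_TEST_FUZZER_SHORT:=0}";   export SPDK_TEST_FUZZER_SHORT   # 1
    : "${SPDK_RUN_UBSAN:=0}";           export SPDK_RUN_UBSAN           # 1
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT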
00:07:38.948 14:27:21 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:38.948 14:27:21 -- common/autotest_common.sh@124 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:38.948 14:27:21 -- common/autotest_common.sh@126 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:38.948 14:27:21 -- common/autotest_common.sh@128 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:38.948 14:27:21 -- common/autotest_common.sh@130 -- # : 0 00:07:38.948 14:27:21 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:38.948 14:27:21 -- common/autotest_common.sh@132 -- # : 00:07:38.948 14:27:21 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:38.948 14:27:21 -- common/autotest_common.sh@134 -- # : true 00:07:39.208 14:27:21 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:39.208 14:27:21 -- common/autotest_common.sh@136 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:39.208 14:27:21 -- common/autotest_common.sh@138 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:39.208 14:27:21 -- common/autotest_common.sh@140 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:39.208 14:27:21 -- common/autotest_common.sh@142 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:39.208 14:27:21 -- common/autotest_common.sh@144 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:39.208 14:27:21 -- common/autotest_common.sh@146 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:39.208 14:27:21 -- common/autotest_common.sh@148 -- # : 00:07:39.208 14:27:21 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:39.208 14:27:21 -- common/autotest_common.sh@150 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:39.208 14:27:21 -- common/autotest_common.sh@152 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:39.208 14:27:21 -- common/autotest_common.sh@154 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:39.208 14:27:21 -- common/autotest_common.sh@156 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:39.208 14:27:21 -- common/autotest_common.sh@158 -- # : 0 00:07:39.208 14:27:21 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:39.209 14:27:21 -- common/autotest_common.sh@160 -- # : 0 00:07:39.209 14:27:21 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:39.209 14:27:21 -- common/autotest_common.sh@163 -- # : 00:07:39.209 14:27:21 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:39.209 14:27:21 -- common/autotest_common.sh@165 -- # : 0 00:07:39.209 14:27:21 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:39.209 14:27:21 -- common/autotest_common.sh@167 -- # : 0 00:07:39.209 14:27:21 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:39.209 14:27:21 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:39.209 14:27:21 -- 
common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:39.209 14:27:21 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:39.209 14:27:21 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:39.209 14:27:21 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:39.209 14:27:21 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:39.209 14:27:21 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:39.209 14:27:21 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:39.209 14:27:21 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:39.209 14:27:21 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:39.209 14:27:21 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:39.209 14:27:21 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:39.209 14:27:21 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:39.209 14:27:21 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:39.209 14:27:21 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:39.209 14:27:21 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:39.209 14:27:21 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:39.209 14:27:21 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:39.209 14:27:21 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:39.209 14:27:21 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:39.209 14:27:21 -- common/autotest_common.sh@196 -- # cat 00:07:39.209 14:27:21 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:39.209 14:27:21 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:39.209 14:27:21 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:39.209 14:27:21 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:39.209 14:27:21 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:39.209 14:27:21 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:39.209 14:27:21 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:39.209 14:27:21 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:39.209 14:27:21 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:39.209 14:27:21 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:39.209 14:27:21 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:39.209 14:27:21 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:39.209 14:27:21 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:39.209 14:27:21 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:39.209 14:27:21 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:39.209 14:27:21 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:39.209 14:27:21 -- common/autotest_common.sh@242 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:39.209 14:27:21 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:39.209 14:27:21 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:39.209 14:27:21 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:39.209 14:27:21 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:39.209 14:27:21 -- common/autotest_common.sh@249 -- # valgrind= 00:07:39.209 14:27:21 -- common/autotest_common.sh@255 -- # uname -s 00:07:39.209 14:27:21 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:39.209 14:27:21 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:39.209 14:27:21 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:39.209 14:27:21 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:39.209 14:27:21 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:39.209 14:27:21 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:39.209 14:27:21 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:39.209 14:27:21 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j72 00:07:39.209 14:27:21 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:39.209 14:27:21 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:39.209 14:27:21 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:39.209 14:27:21 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:39.209 14:27:21 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:39.209 14:27:21 -- common/autotest_common.sh@309 -- # [[ -z 700516 ]] 00:07:39.209 14:27:21 -- common/autotest_common.sh@309 -- # kill -0 700516 00:07:39.209 14:27:21 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:39.209 14:27:21 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:39.209 14:27:21 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:39.209 14:27:21 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:39.209 14:27:21 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:39.209 14:27:21 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:39.209 14:27:21 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:39.209 14:27:21 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:39.209 14:27:21 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.Ey8hIc 00:07:39.209 14:27:21 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:39.209 14:27:21 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:39.209 14:27:21 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:39.209 14:27:21 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.Ey8hIc/tests/nvmf /tmp/spdk.Ey8hIc 00:07:39.209 14:27:21 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:39.209 14:27:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:39.209 14:27:21 -- common/autotest_common.sh@318 -- # df -T 00:07:39.209 14:27:21 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:39.209 14:27:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:39.209 14:27:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 
00:07:39.209 14:27:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:39.209 14:27:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:39.209 14:27:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:39.209 14:27:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:39.209 14:27:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:39.209 14:27:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:39.209 14:27:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=722997248 00:07:39.209 14:27:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:07:39.209 14:27:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=4561432576 00:07:39.209 14:27:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:39.209 14:27:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:39.209 14:27:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:39.209 14:27:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=87174807552 00:07:39.209 14:27:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94500270080 00:07:39.209 14:27:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=7325462528 00:07:39.209 14:27:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:39.209 14:27:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:39.209 14:27:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:39.209 14:27:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=47248875520 00:07:39.210 14:27:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47250132992 00:07:39.210 14:27:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:07:39.210 14:27:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:39.210 14:27:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:39.210 14:27:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:39.210 14:27:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=18894151680 00:07:39.210 14:27:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18900054016 00:07:39.210 14:27:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=5902336 00:07:39.210 14:27:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:39.210 14:27:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:39.210 14:27:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:39.210 14:27:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=47249698816 00:07:39.210 14:27:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47250137088 00:07:39.210 14:27:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=438272 00:07:39.210 14:27:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:39.210 14:27:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:39.210 14:27:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:39.210 14:27:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450012672 00:07:39.210 14:27:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450024960 00:07:39.210 14:27:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:07:39.210 14:27:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:39.210 14:27:21 -- 
common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:39.210 * Looking for test storage... 00:07:39.210 14:27:21 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:39.210 14:27:21 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:39.210 14:27:21 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.210 14:27:21 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:39.210 14:27:21 -- common/autotest_common.sh@363 -- # mount=/ 00:07:39.210 14:27:21 -- common/autotest_common.sh@365 -- # target_space=87174807552 00:07:39.210 14:27:21 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:39.210 14:27:21 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:39.210 14:27:21 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:39.210 14:27:21 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:39.210 14:27:21 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:39.210 14:27:21 -- common/autotest_common.sh@372 -- # new_size=9540055040 00:07:39.210 14:27:21 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:39.210 14:27:21 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.210 14:27:21 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.210 14:27:21 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.210 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.210 14:27:21 -- common/autotest_common.sh@380 -- # return 0 00:07:39.210 14:27:21 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:39.210 14:27:21 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:39.210 14:27:21 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:39.210 14:27:21 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:39.210 14:27:21 -- common/autotest_common.sh@1672 -- # true 00:07:39.210 14:27:21 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:39.210 14:27:21 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:39.210 14:27:21 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:39.210 14:27:21 -- common/autotest_common.sh@27 -- # exec 00:07:39.210 14:27:21 -- common/autotest_common.sh@29 -- # exec 00:07:39.210 14:27:21 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:39.210 14:27:21 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:39.210 14:27:21 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:39.210 14:27:21 -- common/autotest_common.sh@18 -- # set -x 00:07:39.210 14:27:21 -- nvmf/run.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:39.210 14:27:21 -- ../common.sh@8 -- # pids=() 00:07:39.210 14:27:21 -- nvmf/run.sh@55 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:39.210 14:27:21 -- nvmf/run.sh@56 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:39.210 14:27:21 -- nvmf/run.sh@56 -- # fuzz_num=25 00:07:39.210 14:27:21 -- nvmf/run.sh@57 -- # (( fuzz_num != 0 )) 00:07:39.210 14:27:21 -- nvmf/run.sh@59 -- # trap 'cleanup /tmp/llvm_fuzz*; exit 1' SIGINT SIGTERM EXIT 00:07:39.210 14:27:21 -- nvmf/run.sh@61 -- # mem_size=512 00:07:39.210 14:27:21 -- nvmf/run.sh@62 -- # [[ 1 -eq 1 ]] 00:07:39.210 14:27:21 -- nvmf/run.sh@63 -- # start_llvm_fuzz_short 25 1 00:07:39.210 14:27:21 -- ../common.sh@69 -- # local fuzz_num=25 00:07:39.210 14:27:21 -- ../common.sh@70 -- # local time=1 00:07:39.210 14:27:21 -- ../common.sh@72 -- # (( i = 0 )) 00:07:39.210 14:27:21 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.210 14:27:21 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:39.210 14:27:21 -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:39.210 14:27:21 -- nvmf/run.sh@24 -- # local timen=1 00:07:39.210 14:27:21 -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.210 14:27:21 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:39.210 14:27:21 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:39.210 14:27:21 -- nvmf/run.sh@29 -- # printf %02d 0 00:07:39.210 14:27:21 -- nvmf/run.sh@29 -- # port=4400 00:07:39.210 14:27:21 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:39.210 14:27:21 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:39.210 14:27:21 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.210 14:27:21 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 -r /var/tmp/spdk0.sock 00:07:39.210 [2024-10-01 14:27:21.607353] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
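Note: two pieces of machinery run back to back above. First, set_test_storage indexes every df -T row into associative arrays, then walks the candidate directories until one has the requested 2 GiB free; for the root overlay it additionally refuses to push usage past 95%. A condensed sketch per the trace (the byte conversion from df's 1K blocks is inferred from the traced values, and the root-mount guard is simplified from the traced tmpfs/ramfs checks):

    set_test_storage() {
        local requested_size=$1 target_dir mount target_space new_size
        local -A mounts fss sizes avails uses
        local source fs size use avail _

        # Index every mount by its df -T row (values converted from 1K blocks to bytes).
        while read -r source fs size use avail _ mount; do
            mounts[$mount]=$source; fss[$mount]=$fs
            sizes[$mount]=$((size * 1024)); uses[$mount]=$((use * 1024)); avails[$mount]=$((avail * 1024))
        done < <(df -T | grep -v Filesystem)

        # storage_candidates was built earlier in the trace:
        # ("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
        for target_dir in "${storage_candidates[@]}"; do
            mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
            target_space=${avails[$mount]}
            (( target_space == 0 || target_space < requested_size )) && continue
            if [[ ${fss[$mount]} != tmpfs && ${fss[$mount]} != ramfs && $mount == / ]]; then
                new_size=$((requested_size + uses[$mount]))
                (( new_size * 100 / sizes[$mount] > 95 )) && continue   # keep / below 95% full
            fi
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            return 0
        done
    }

Second, nvmf/run.sh sizes the campaign and launches the target: fuzz_num=25 comes from counting '.fn =' entries in llvm_nvme_fuzz.c, and each fuzzer index gets its own TCP port and JSON config. A sketch of the traced start_llvm_fuzz (the redirection into /tmp/fuzz_json_0.conf is implied by the nvmf_cfg variable rather than shown in the trace):

    mem_size=512
    fuzz_num=$(grep -c '\.fn =' "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c")   # 25

    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
        local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
        local port="44$(printf %02d "$fuzzer_type")"     # fuzzer 0 -> 4400

        mkdir -p "$corpus_dir"
        local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

        # Re-point the stock listener (4420) at this fuzzer's private port.
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s "$mem_size" \
            -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
            -D "$corpus_dir" -Z "$fuzzer_type" -r "/var/tmp/spdk$fuzzer_type.sock"
    }

With -t 1 each of the 25 fuzzers gets one second of wall time, which is what makes the "short" suite practical; the empty-corpus warning below is expected for a fresh -D directory, and the subsequent "#N NEW cov" lines show libFuzzer growing coverage from nothing.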
00:07:39.210 [2024-10-01 14:27:21.607452] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid700560 ] 00:07:39.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.469 [2024-10-01 14:27:21.813939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.469 [2024-10-01 14:27:21.884090] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.469 [2024-10-01 14:27:21.884212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.469 [2024-10-01 14:27:21.942501] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.469 [2024-10-01 14:27:21.958708] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:39.469 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.469 INFO: Seed: 502922113 00:07:39.727 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:39.727 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:39.727 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:39.727 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.727 #2 INITED exec/s: 0 rss: 61Mb 00:07:39.727 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:39.727 This may also happen if the target rejected all inputs we tried so far 00:07:39.727 [2024-10-01 14:27:22.014025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:39.727 [2024-10-01 14:27:22.014058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.985 NEW_FUNC[1/670]: 0x43a858 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:39.985 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:39.985 #8 NEW cov: 11541 ft: 11531 corp: 2/66b lim: 320 exec/s: 0 rss: 68Mb L: 65/65 MS: 1 InsertRepeatedBytes- 00:07:39.985 [2024-10-01 14:27:22.335040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:39.985 [2024-10-01 14:27:22.335101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.985 #9 NEW cov: 11654 ft: 12151 corp: 3/132b lim: 320 exec/s: 0 rss: 68Mb L: 66/66 MS: 1 CrossOver- 00:07:39.985 [2024-10-01 14:27:22.384955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:39.985 [2024-10-01 14:27:22.384981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.985 #10 NEW cov: 11660 ft: 12482 corp: 4/209b lim: 320 exec/s: 0 rss: 68Mb L: 77/77 MS: 1 CrossOver- 00:07:39.986 [2024-10-01 14:27:22.425029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0xffffffffffffffff 00:07:39.986 [2024-10-01 14:27:22.425054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.986 #11 NEW cov: 11745 ft: 12745 corp: 5/275b lim: 320 exec/s: 0 rss: 68Mb L: 66/77 MS: 1 ChangeBinInt- 00:07:39.986 [2024-10-01 14:27:22.465155] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:39.986 [2024-10-01 14:27:22.465181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.986 #12 NEW cov: 11745 ft: 12872 corp: 6/341b lim: 320 exec/s: 0 rss: 68Mb L: 66/77 MS: 1 ChangeBit- 00:07:39.986 [2024-10-01 14:27:22.505267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffffdffffffffff 00:07:39.986 [2024-10-01 14:27:22.505293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.244 #13 NEW cov: 11745 ft: 12918 corp: 7/413b lim: 320 exec/s: 0 rss: 69Mb L: 72/77 MS: 1 CopyPart- 00:07:40.244 [2024-10-01 14:27:22.545523] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x6868686868686868 00:07:40.244 [2024-10-01 14:27:22.545549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.244 [2024-10-01 14:27:22.545607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (68) qid:0 cid:5 nsid:68686868 cdw10:68686868 cdw11:68686868 00:07:40.244 [2024-10-01 14:27:22.545621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.244 #20 NEW cov: 11768 ft: 13158 corp: 8/602b lim: 320 exec/s: 0 rss: 69Mb L: 189/189 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:40.244 [2024-10-01 14:27:22.585538] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffddcffffffffff 00:07:40.244 [2024-10-01 14:27:22.585564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.244 #21 NEW cov: 11768 ft: 13206 corp: 9/675b lim: 320 exec/s: 0 rss: 69Mb L: 73/189 MS: 1 InsertByte- 00:07:40.244 [2024-10-01 14:27:22.625757] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x6868686868686868 00:07:40.244 [2024-10-01 14:27:22.625783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.244 [2024-10-01 14:27:22.625841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (68) qid:0 cid:5 nsid:68686868 cdw10:68686868 cdw11:68686868 00:07:40.244 [2024-10-01 14:27:22.625855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.244 #22 NEW cov: 11768 ft: 13298 corp: 10/864b lim: 320 exec/s: 0 rss: 69Mb L: 189/189 MS: 1 ChangeByte- 00:07:40.244 [2024-10-01 14:27:22.665738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 
0xffffffffffffffff 00:07:40.244 [2024-10-01 14:27:22.665763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.244 #23 NEW cov: 11768 ft: 13355 corp: 11/930b lim: 320 exec/s: 0 rss: 69Mb L: 66/189 MS: 1 ShuffleBytes- 00:07:40.244 [2024-10-01 14:27:22.705893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.244 [2024-10-01 14:27:22.705919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.244 #24 NEW cov: 11768 ft: 13394 corp: 12/1013b lim: 320 exec/s: 0 rss: 69Mb L: 83/189 MS: 1 CrossOver- 00:07:40.244 [2024-10-01 14:27:22.745979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.244 [2024-10-01 14:27:22.746004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.244 #29 NEW cov: 11768 ft: 13408 corp: 13/1126b lim: 320 exec/s: 0 rss: 69Mb L: 113/189 MS: 5 EraseBytes-ChangeByte-InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:07:40.501 [2024-10-01 14:27:22.786127] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffddcffffffffff 00:07:40.501 [2024-10-01 14:27:22.786153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.501 #30 NEW cov: 11768 ft: 13437 corp: 14/1199b lim: 320 exec/s: 0 rss: 69Mb L: 73/189 MS: 1 ChangeBit- 00:07:40.501 [2024-10-01 14:27:22.826249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.501 [2024-10-01 14:27:22.826274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.501 #46 NEW cov: 11768 ft: 13478 corp: 15/1276b lim: 320 exec/s: 0 rss: 69Mb L: 77/189 MS: 1 ChangeBit- 00:07:40.501 [2024-10-01 14:27:22.866341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3 00:07:40.501 [2024-10-01 14:27:22.866366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.501 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:40.501 #47 NEW cov: 11791 ft: 13498 corp: 16/1397b lim: 320 exec/s: 0 rss: 69Mb L: 121/189 MS: 1 CMP- DE: "\344\003\000\000\000\000\000\000"- 00:07:40.501 [2024-10-01 14:27:22.906469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3 00:07:40.501 [2024-10-01 14:27:22.906493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.501 #48 NEW cov: 11791 ft: 13519 corp: 17/1518b lim: 320 exec/s: 0 rss: 69Mb L: 121/189 MS: 1 CopyPart- 00:07:40.501 [2024-10-01 14:27:22.946573] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff28ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.501 [2024-10-01 
14:27:22.946598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.501 #49 NEW cov: 11791 ft: 13541 corp: 18/1585b lim: 320 exec/s: 0 rss: 69Mb L: 67/189 MS: 1 InsertByte- 00:07:40.501 [2024-10-01 14:27:22.986869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.501 [2024-10-01 14:27:22.986894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.501 [2024-10-01 14:27:22.986951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (68) qid:0 cid:5 nsid:68686868 cdw10:68686868 cdw11:68686868 00:07:40.501 [2024-10-01 14:27:22.986964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.501 #50 NEW cov: 11791 ft: 13559 corp: 19/1761b lim: 320 exec/s: 50 rss: 69Mb L: 176/189 MS: 1 CrossOver- 00:07:40.759 [2024-10-01 14:27:23.026857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffffdffffffffff 00:07:40.759 [2024-10-01 14:27:23.026883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.759 #56 NEW cov: 11791 ft: 13580 corp: 20/1833b lim: 320 exec/s: 56 rss: 69Mb L: 72/189 MS: 1 CopyPart- 00:07:40.759 [2024-10-01 14:27:23.067120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffff 00:07:40.759 [2024-10-01 14:27:23.067146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.759 [2024-10-01 14:27:23.067205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:07:40.759 [2024-10-01 14:27:23.067219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.759 [2024-10-01 14:27:23.067276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ff0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.759 [2024-10-01 14:27:23.067290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.759 NEW_FUNC[1/1]: 0x12c5228 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2014 00:07:40.759 #57 NEW cov: 11823 ft: 13781 corp: 21/2029b lim: 320 exec/s: 57 rss: 69Mb L: 196/196 MS: 1 CrossOver- 00:07:40.759 [2024-10-01 14:27:23.107079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.759 [2024-10-01 14:27:23.107109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.759 #58 NEW cov: 11823 ft: 13881 corp: 22/2095b lim: 320 exec/s: 58 rss: 69Mb L: 66/196 MS: 1 CopyPart- 00:07:40.759 [2024-10-01 14:27:23.147283] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3e4 00:07:40.759 
[2024-10-01 14:27:23.147309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.759 [2024-10-01 14:27:23.147364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffff00 00:07:40.759 [2024-10-01 14:27:23.147377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.759 #59 NEW cov: 11823 ft: 13909 corp: 23/2224b lim: 320 exec/s: 59 rss: 69Mb L: 129/196 MS: 1 PersAutoDict- DE: "\344\003\000\000\000\000\000\000"- 00:07:40.759 [2024-10-01 14:27:23.187399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.759 [2024-10-01 14:27:23.187424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.759 [2024-10-01 14:27:23.187483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (68) qid:0 cid:5 nsid:68686868 cdw10:68686868 cdw11:68686868 00:07:40.759 [2024-10-01 14:27:23.187497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.759 #60 NEW cov: 11823 ft: 13914 corp: 24/2392b lim: 320 exec/s: 60 rss: 69Mb L: 168/196 MS: 1 EraseBytes- 00:07:40.759 [2024-10-01 14:27:23.227510] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3 00:07:40.759 [2024-10-01 14:27:23.227535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.759 [2024-10-01 14:27:23.227595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff 00:07:40.759 [2024-10-01 14:27:23.227609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.759 #61 NEW cov: 11823 ft: 13935 corp: 25/2545b lim: 320 exec/s: 61 rss: 69Mb L: 153/196 MS: 1 InsertRepeatedBytes- 00:07:40.759 [2024-10-01 14:27:23.267528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffffdffffffffff 00:07:40.759 [2024-10-01 14:27:23.267553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 #67 NEW cov: 11823 ft: 13941 corp: 26/2617b lim: 320 exec/s: 67 rss: 69Mb L: 72/196 MS: 1 ShuffleBytes- 00:07:41.018 [2024-10-01 14:27:23.297612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff28ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.018 [2024-10-01 14:27:23.297637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 #68 NEW cov: 11823 ft: 13944 corp: 27/2684b lim: 320 exec/s: 68 rss: 69Mb L: 67/196 MS: 1 ChangeByte- 00:07:41.018 [2024-10-01 14:27:23.337774] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.018 [2024-10-01 14:27:23.337800] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 #69 NEW cov: 11823 ft: 13957 corp: 28/2750b lim: 320 exec/s: 69 rss: 69Mb L: 66/196 MS: 1 ChangeBinInt- 00:07:41.018 [2024-10-01 14:27:23.377888] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.018 [2024-10-01 14:27:23.377914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 #70 NEW cov: 11823 ft: 13976 corp: 29/2827b lim: 320 exec/s: 70 rss: 69Mb L: 77/196 MS: 1 ChangeBit- 00:07:41.018 [2024-10-01 14:27:23.417957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3e4 00:07:41.018 [2024-10-01 14:27:23.417983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 #71 NEW cov: 11823 ft: 13982 corp: 30/2904b lim: 320 exec/s: 71 rss: 70Mb L: 77/196 MS: 1 EraseBytes- 00:07:41.018 [2024-10-01 14:27:23.458111] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3e4 00:07:41.018 [2024-10-01 14:27:23.458137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 #72 NEW cov: 11823 ft: 13996 corp: 31/3026b lim: 320 exec/s: 72 rss: 70Mb L: 122/196 MS: 1 InsertByte- 00:07:41.018 [2024-10-01 14:27:23.498181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.018 [2024-10-01 14:27:23.498206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 #73 NEW cov: 11823 ft: 14002 corp: 32/3137b lim: 320 exec/s: 73 rss: 70Mb L: 111/196 MS: 1 InsertRepeatedBytes- 00:07:41.018 [2024-10-01 14:27:23.538407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:03e40000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3e4 00:07:41.018 [2024-10-01 14:27:23.538434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.276 #74 NEW cov: 11823 ft: 14061 corp: 33/3214b lim: 320 exec/s: 74 rss: 70Mb L: 77/196 MS: 1 PersAutoDict- DE: "\344\003\000\000\000\000\000\000"- 00:07:41.276 [2024-10-01 14:27:23.578437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff28ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.276 [2024-10-01 14:27:23.578465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.276 #75 NEW cov: 11823 ft: 14075 corp: 34/3281b lim: 320 exec/s: 75 rss: 70Mb L: 67/196 MS: 1 ChangeBinInt- 00:07:41.276 [2024-10-01 14:27:23.618564] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3 00:07:41.276 [2024-10-01 14:27:23.618590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.276 #76 NEW cov: 11823 ft: 14094 corp: 35/3357b lim: 320 exec/s: 76 rss: 70Mb L: 76/196 MS: 1 
EraseBytes- 00:07:41.276 [2024-10-01 14:27:23.648652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffffdffffffffff 00:07:41.276 [2024-10-01 14:27:23.648681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.276 #77 NEW cov: 11823 ft: 14106 corp: 36/3429b lim: 320 exec/s: 77 rss: 70Mb L: 72/196 MS: 1 ChangeBit- 00:07:41.276 [2024-10-01 14:27:23.688887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xaffffffffffffff 00:07:41.276 [2024-10-01 14:27:23.688912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.277 [2024-10-01 14:27:23.688976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:fffffdff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffff8ffffffffff 00:07:41.277 [2024-10-01 14:27:23.688991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.277 #78 NEW cov: 11823 ft: 14120 corp: 37/3567b lim: 320 exec/s: 78 rss: 70Mb L: 138/196 MS: 1 CrossOver- 00:07:41.277 [2024-10-01 14:27:23.728904] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffddcffffffffff 00:07:41.277 [2024-10-01 14:27:23.728930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.277 #84 NEW cov: 11823 ft: 14126 corp: 38/3641b lim: 320 exec/s: 84 rss: 70Mb L: 74/196 MS: 1 InsertByte- 00:07:41.277 [2024-10-01 14:27:23.769008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.277 [2024-10-01 14:27:23.769032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.277 [2024-10-01 14:27:23.799112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.277 [2024-10-01 14:27:23.799137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.535 #86 NEW cov: 11823 ft: 14146 corp: 39/3738b lim: 320 exec/s: 86 rss: 70Mb L: 97/196 MS: 2 ChangeBinInt-EraseBytes- 00:07:41.535 [2024-10-01 14:27:23.839233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.535 [2024-10-01 14:27:23.839257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.535 NEW_FUNC[1/1]: 0x16c4058 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:07:41.535 #87 NEW cov: 11836 ft: 14479 corp: 40/3813b lim: 320 exec/s: 87 rss: 70Mb L: 75/196 MS: 1 InsertByte- 00:07:41.535 [2024-10-01 14:27:23.889373] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.535 [2024-10-01 14:27:23.889399] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.535 #88 NEW cov: 11836 ft: 14540 corp: 41/3891b lim: 320 exec/s: 88 rss: 70Mb L: 78/196 MS: 1 InsertByte- 00:07:41.535 [2024-10-01 14:27:23.929438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.535 [2024-10-01 14:27:23.929462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.535 #89 NEW cov: 11836 ft: 14553 corp: 42/3956b lim: 320 exec/s: 89 rss: 70Mb L: 65/196 MS: 1 EraseBytes- 00:07:41.535 [2024-10-01 14:27:23.969578] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:03e40000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3e4 00:07:41.535 [2024-10-01 14:27:23.969606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.535 #95 NEW cov: 11836 ft: 14564 corp: 43/4033b lim: 320 exec/s: 95 rss: 70Mb L: 77/196 MS: 1 ChangeBit- 00:07:41.536 [2024-10-01 14:27:24.009690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffddcffffffffff 00:07:41.536 [2024-10-01 14:27:24.009715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.536 #96 NEW cov: 11836 ft: 14567 corp: 44/4107b lim: 320 exec/s: 48 rss: 70Mb L: 74/196 MS: 1 ChangeByte- 00:07:41.536 #96 DONE cov: 11836 ft: 14567 corp: 44/4107b lim: 320 exec/s: 48 rss: 70Mb 00:07:41.536 ###### Recommended dictionary. ###### 00:07:41.536 "\344\003\000\000\000\000\000\000" # Uses: 2 00:07:41.536 ###### End of recommended dictionary. 
###### 00:07:41.536 Done 96 runs in 2 second(s) 00:07:41.795 14:27:24 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_0.conf 00:07:41.795 14:27:24 -- ../common.sh@72 -- # (( i++ )) 00:07:41.795 14:27:24 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.795 14:27:24 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:41.795 14:27:24 -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:41.795 14:27:24 -- nvmf/run.sh@24 -- # local timen=1 00:07:41.795 14:27:24 -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.795 14:27:24 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:41.795 14:27:24 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:41.795 14:27:24 -- nvmf/run.sh@29 -- # printf %02d 1 00:07:41.795 14:27:24 -- nvmf/run.sh@29 -- # port=4401 00:07:41.795 14:27:24 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:41.795 14:27:24 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:41.795 14:27:24 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.795 14:27:24 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 -r /var/tmp/spdk1.sock 00:07:41.795 [2024-10-01 14:27:24.212416] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:41.795 [2024-10-01 14:27:24.212494] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid700925 ] 00:07:41.795 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.054 [2024-10-01 14:27:24.524884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.312 [2024-10-01 14:27:24.610624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:42.312 [2024-10-01 14:27:24.610772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.312 [2024-10-01 14:27:24.669131] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.312 [2024-10-01 14:27:24.685366] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:42.312 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.312 INFO: Seed: 3226912237 00:07:42.312 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:42.312 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:42.313 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:42.313 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.313 #2 INITED exec/s: 0 rss: 61Mb 00:07:42.313 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:42.313 This may also happen if the target rejected all inputs we tried so far 00:07:42.313 [2024-10-01 14:27:24.733883] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (36868) > buf size (4096) 00:07:42.313 [2024-10-01 14:27:24.734091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.313 [2024-10-01 14:27:24.734122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.572 NEW_FUNC[1/671]: 0x43b158 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:42.572 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.572 #5 NEW cov: 11639 ft: 11640 corp: 2/11b lim: 30 exec/s: 0 rss: 68Mb L: 10/10 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:42.572 [2024-10-01 14:27:25.044755] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.572 [2024-10-01 14:27:25.044889] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.572 [2024-10-01 14:27:25.044997] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:07:42.572 [2024-10-01 14:27:25.045202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.572 [2024-10-01 14:27:25.045235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.572 [2024-10-01 14:27:25.045288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.572 [2024-10-01 14:27:25.045303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.572 [2024-10-01 14:27:25.045356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.572 [2024-10-01 14:27:25.045369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.572 #7 NEW cov: 11758 ft: 12373 corp: 3/30b lim: 30 exec/s: 0 rss: 68Mb L: 19/19 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:42.572 [2024-10-01 14:27:25.084784] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.572 [2024-10-01 14:27:25.084903] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.572 [2024-10-01 14:27:25.085012] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:42.572 [2024-10-01 14:27:25.085231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.572 [2024-10-01 14:27:25.085257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.572 [2024-10-01 14:27:25.085310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.572 [2024-10-01 14:27:25.085324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.572 [2024-10-01 14:27:25.085376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.572 [2024-10-01 14:27:25.085389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.831 #8 NEW cov: 11764 ft: 12715 corp: 4/53b lim: 30 exec/s: 0 rss: 68Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:07:42.831 [2024-10-01 14:27:25.134893] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.831 [2024-10-01 14:27:25.135013] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.831 [2024-10-01 14:27:25.135123] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.831 [2024-10-01 14:27:25.135328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.135358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.831 [2024-10-01 14:27:25.135412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.135425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.831 [2024-10-01 14:27:25.135479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.135492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.831 #9 NEW cov: 11849 ft: 13032 corp: 5/71b lim: 30 exec/s: 0 rss: 68Mb L: 18/23 MS: 1 CrossOver- 00:07:42.831 [2024-10-01 14:27:25.175039] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.831 [2024-10-01 14:27:25.175161] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047596) > buf size (4096) 00:07:42.831 [2024-10-01 14:27:25.175268] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.831 [2024-10-01 14:27:25.175491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.175518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.831 [2024-10-01 14:27:25.175574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.175590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.831 [2024-10-01 14:27:25.175643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:06ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.175656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.831 #10 NEW cov: 11849 ft: 13104 corp: 6/89b lim: 30 exec/s: 0 rss: 68Mb L: 18/23 MS: 1 ChangeBinInt- 00:07:42.831 [2024-10-01 14:27:25.215133] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003dff 00:07:42.831 [2024-10-01 14:27:25.215251] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047596) > buf size (4096) 00:07:42.831 [2024-10-01 14:27:25.215362] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.831 [2024-10-01 14:27:25.215564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.215589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.831 [2024-10-01 14:27:25.215644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.215658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.831 [2024-10-01 14:27:25.215713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:06ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.215734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.831 #11 NEW cov: 11849 ft: 13224 corp: 7/107b lim: 30 exec/s: 0 rss: 69Mb L: 18/23 MS: 1 ChangeByte- 00:07:42.831 [2024-10-01 14:27:25.255190] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:07:42.831 [2024-10-01 14:27:25.255404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.255430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.831 #14 NEW cov: 11849 ft: 13321 corp: 8/115b lim: 30 exec/s: 0 rss: 69Mb L: 8/23 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:42.831 [2024-10-01 14:27:25.295359] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.831 [2024-10-01 14:27:25.295476] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:42.831 [2024-10-01 14:27:25.295585] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.831 [2024-10-01 14:27:25.295800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.295825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.831 [2024-10-01 14:27:25.295880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff 
cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.831 [2024-10-01 14:27:25.295894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.832 [2024-10-01 14:27:25.295945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.832 [2024-10-01 14:27:25.295959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.832 #15 NEW cov: 11849 ft: 13344 corp: 9/135b lim: 30 exec/s: 0 rss: 69Mb L: 20/23 MS: 1 CrossOver- 00:07:42.832 [2024-10-01 14:27:25.335412] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (36868) > buf size (4096) 00:07:42.832 [2024-10-01 14:27:25.335626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:24000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.832 [2024-10-01 14:27:25.335650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.091 #16 NEW cov: 11849 ft: 13382 corp: 10/145b lim: 30 exec/s: 0 rss: 69Mb L: 10/23 MS: 1 ChangeBit- 00:07:43.091 [2024-10-01 14:27:25.375546] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:43.091 [2024-10-01 14:27:25.375681] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:43.091 [2024-10-01 14:27:25.375906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b2b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.375933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.091 [2024-10-01 14:27:25.375988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b2b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.376002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.091 #17 NEW cov: 11849 ft: 13696 corp: 11/161b lim: 30 exec/s: 0 rss: 69Mb L: 16/23 MS: 1 InsertRepeatedBytes- 00:07:43.091 [2024-10-01 14:27:25.415715] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.415840] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:43.091 [2024-10-01 14:27:25.415947] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.416154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.416182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.091 [2024-10-01 14:27:25.416236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.416250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:43.091 [2024-10-01 14:27:25.416303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.416316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.091 #18 NEW cov: 11849 ft: 13728 corp: 12/181b lim: 30 exec/s: 0 rss: 69Mb L: 20/23 MS: 1 ChangeByte- 00:07:43.091 [2024-10-01 14:27:25.455803] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.455919] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:43.091 [2024-10-01 14:27:25.456027] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.456232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.456257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.091 [2024-10-01 14:27:25.456310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.456325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.091 [2024-10-01 14:27:25.456376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.456389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.091 #24 NEW cov: 11849 ft: 13750 corp: 13/201b lim: 30 exec/s: 0 rss: 69Mb L: 20/23 MS: 1 ChangeByte- 00:07:43.091 [2024-10-01 14:27:25.495984] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.496100] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.496209] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:07:43.091 [2024-10-01 14:27:25.496427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.496451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.091 [2024-10-01 14:27:25.496508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.496523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.091 [2024-10-01 14:27:25.496576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.496589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:43.091 #25 NEW cov: 11849 ft: 13795 corp: 14/220b lim: 30 exec/s: 0 rss: 69Mb L: 19/23 MS: 1 ShuffleBytes- 00:07:43.091 [2024-10-01 14:27:25.536011] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.536129] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.536336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:24008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.536361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.091 [2024-10-01 14:27:25.536414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.536428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.091 #28 NEW cov: 11849 ft: 13844 corp: 15/234b lim: 30 exec/s: 0 rss: 69Mb L: 14/23 MS: 3 EraseBytes-EraseBytes-InsertRepeatedBytes- 00:07:43.091 [2024-10-01 14:27:25.576202] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.576335] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.576443] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.576549] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.091 [2024-10-01 14:27:25.576758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.576786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.091 [2024-10-01 14:27:25.576840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.091 [2024-10-01 14:27:25.576854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.092 [2024-10-01 14:27:25.576908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.092 [2024-10-01 14:27:25.576921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.092 [2024-10-01 14:27:25.576973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:000683ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.092 [2024-10-01 14:27:25.576987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.092 #29 NEW cov: 11849 ft: 14319 corp: 16/259b lim: 30 exec/s: 0 rss: 69Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:43.351 [2024-10-01 14:27:25.616266] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:43.351 [2024-10-01 14:27:25.616396] 
ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:43.351 [2024-10-01 14:27:25.616596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b2b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.616622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.616674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b3b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.616688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.351 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:43.351 #30 NEW cov: 11872 ft: 14444 corp: 17/275b lim: 30 exec/s: 0 rss: 69Mb L: 16/25 MS: 1 ChangeBit- 00:07:43.351 [2024-10-01 14:27:25.666447] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:07:43.351 [2024-10-01 14:27:25.666564] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009d9d 00:07:43.351 [2024-10-01 14:27:25.666678] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009d9d 00:07:43.351 [2024-10-01 14:27:25.666898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.666924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.666979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9d9d819d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.666993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.667046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9d9d819d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.667059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.351 #31 NEW cov: 11872 ft: 14502 corp: 18/298b lim: 30 exec/s: 0 rss: 69Mb L: 23/25 MS: 1 InsertRepeatedBytes- 00:07:43.351 [2024-10-01 14:27:25.706528] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003dff 00:07:43.351 [2024-10-01 14:27:25.706645] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.351 [2024-10-01 14:27:25.706858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.706884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.706938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.706952] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.351 #37 NEW cov: 11872 ft: 14521 corp: 19/310b lim: 30 exec/s: 37 rss: 69Mb L: 12/25 MS: 1 EraseBytes- 00:07:43.351 [2024-10-01 14:27:25.746623] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003dff 00:07:43.351 [2024-10-01 14:27:25.746750] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047596) > buf size (4096) 00:07:43.351 [2024-10-01 14:27:25.746954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.746980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.747034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.747047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.351 #38 NEW cov: 11872 ft: 14562 corp: 20/327b lim: 30 exec/s: 38 rss: 69Mb L: 17/25 MS: 1 InsertRepeatedBytes- 00:07:43.351 [2024-10-01 14:27:25.786806] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.351 [2024-10-01 14:27:25.786923] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.351 [2024-10-01 14:27:25.787033] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.351 [2024-10-01 14:27:25.787139] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.351 [2024-10-01 14:27:25.787359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.787384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.787442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.787455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.787510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.787524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.787575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:000683ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.787588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.351 #39 NEW cov: 11872 ft: 14590 corp: 21/352b lim: 30 exec/s: 39 rss: 69Mb L: 25/25 MS: 1 ChangeByte- 00:07:43.351 [2024-10-01 14:27:25.836850] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log 
page offset 0x30000ffff 00:07:43.351 [2024-10-01 14:27:25.837061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.837085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.351 #40 NEW cov: 11872 ft: 14594 corp: 22/362b lim: 30 exec/s: 40 rss: 69Mb L: 10/25 MS: 1 CrossOver- 00:07:43.351 [2024-10-01 14:27:25.866974] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.351 [2024-10-01 14:27:25.867093] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:43.351 [2024-10-01 14:27:25.867203] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.351 [2024-10-01 14:27:25.867421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.867446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.867501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.867514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.351 [2024-10-01 14:27:25.867568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.351 [2024-10-01 14:27:25.867581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.610 #41 NEW cov: 11872 ft: 14605 corp: 23/382b lim: 30 exec/s: 41 rss: 69Mb L: 20/25 MS: 1 CopyPart- 00:07:43.610 [2024-10-01 14:27:25.907020] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000006 00:07:43.610 [2024-10-01 14:27:25.907231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:25.907256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.610 #42 NEW cov: 11872 ft: 14635 corp: 24/389b lim: 30 exec/s: 42 rss: 69Mb L: 7/25 MS: 1 EraseBytes- 00:07:43.610 [2024-10-01 14:27:25.947184] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:43.610 [2024-10-01 14:27:25.947305] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:43.610 [2024-10-01 14:27:25.947512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b2b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:25.947542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.610 [2024-10-01 14:27:25.947594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b3b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:43.610 [2024-10-01 14:27:25.947608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.610 #43 NEW cov: 11872 ft: 14720 corp: 25/403b lim: 30 exec/s: 43 rss: 69Mb L: 14/25 MS: 1 EraseBytes- 00:07:43.610 [2024-10-01 14:27:25.987283] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.610 [2024-10-01 14:27:25.987400] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.610 [2024-10-01 14:27:25.987606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:25.987631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.610 [2024-10-01 14:27:25.987682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:25.987697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.610 #44 NEW cov: 11872 ft: 14734 corp: 26/417b lim: 30 exec/s: 44 rss: 69Mb L: 14/25 MS: 1 EraseBytes- 00:07:43.610 [2024-10-01 14:27:26.027395] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:07:43.610 [2024-10-01 14:27:26.027511] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009d9d 00:07:43.610 [2024-10-01 14:27:26.027620] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009d9d 00:07:43.610 [2024-10-01 14:27:26.027835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.027859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.610 [2024-10-01 14:27:26.027913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9d9d819d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.027926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.610 [2024-10-01 14:27:26.027979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9d9d8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.027993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.610 #45 NEW cov: 11872 ft: 14753 corp: 27/440b lim: 30 exec/s: 45 rss: 70Mb L: 23/25 MS: 1 ChangeBinInt- 00:07:43.610 [2024-10-01 14:27:26.067549] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.610 [2024-10-01 14:27:26.067667] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.610 [2024-10-01 14:27:26.067781] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (65280) > len (4) 00:07:43.610 [2024-10-01 14:27:26.067995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.068020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.610 [2024-10-01 14:27:26.068075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.068092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.610 [2024-10-01 14:27:26.068146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.068159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.610 #46 NEW cov: 11885 ft: 14778 corp: 28/463b lim: 30 exec/s: 46 rss: 70Mb L: 23/25 MS: 1 CopyPart- 00:07:43.610 [2024-10-01 14:27:26.107704] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.610 [2024-10-01 14:27:26.107831] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:43.610 [2024-10-01 14:27:26.107939] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.610 [2024-10-01 14:27:26.108051] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.610 [2024-10-01 14:27:26.108260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.108286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.610 [2024-10-01 14:27:26.108342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:19000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.108356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.610 [2024-10-01 14:27:26.108409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.108423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.610 [2024-10-01 14:27:26.108474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:000683ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.610 [2024-10-01 14:27:26.108487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.869 #47 NEW cov: 11885 ft: 14801 corp: 29/488b lim: 30 exec/s: 47 rss: 70Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:43.869 [2024-10-01 14:27:26.157792] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.869 [2024-10-01 14:27:26.157909] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.869 [2024-10-01 14:27:26.158019] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (65280) > len (4) 00:07:43.869 [2024-10-01 14:27:26.158222] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.869 [2024-10-01 14:27:26.158248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.869 [2024-10-01 14:27:26.158301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff7f83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.869 [2024-10-01 14:27:26.158315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.869 [2024-10-01 14:27:26.158368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.869 [2024-10-01 14:27:26.158382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.869 #48 NEW cov: 11885 ft: 14832 corp: 30/511b lim: 30 exec/s: 48 rss: 70Mb L: 23/25 MS: 1 ChangeBit- 00:07:43.869 [2024-10-01 14:27:26.197916] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (36868) > buf size (4096) 00:07:43.869 [2024-10-01 14:27:26.198039] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:43.869 [2024-10-01 14:27:26.198346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:24000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.869 [2024-10-01 14:27:26.198371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.869 [2024-10-01 14:27:26.198425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.869 [2024-10-01 14:27:26.198438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.869 [2024-10-01 14:27:26.198490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.869 [2024-10-01 14:27:26.198504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.869 #49 NEW cov: 11895 ft: 14864 corp: 31/531b lim: 30 exec/s: 49 rss: 70Mb L: 20/25 MS: 1 CopyPart- 00:07:43.869 [2024-10-01 14:27:26.238054] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.869 [2024-10-01 14:27:26.238172] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002f2f 00:07:43.870 [2024-10-01 14:27:26.238281] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.870 [2024-10-01 14:27:26.238390] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.870 [2024-10-01 14:27:26.238595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:24008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.238619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.870 [2024-10-01 
14:27:26.238674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2f2f832f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.238688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.870 [2024-10-01 14:27:26.238737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:2f2f832f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.238751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.870 [2024-10-01 14:27:26.238803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.238817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.870 #50 NEW cov: 11895 ft: 14895 corp: 32/555b lim: 30 exec/s: 50 rss: 70Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:07:43.870 [2024-10-01 14:27:26.288113] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:07:43.870 [2024-10-01 14:27:26.288342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.288367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.870 #51 NEW cov: 11895 ft: 14898 corp: 33/563b lim: 30 exec/s: 51 rss: 70Mb L: 8/25 MS: 1 ShuffleBytes- 00:07:43.870 [2024-10-01 14:27:26.328312] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:07:43.870 [2024-10-01 14:27:26.328431] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009d9d 00:07:43.870 [2024-10-01 14:27:26.328545] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009d9d 00:07:43.870 [2024-10-01 14:27:26.328659] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009d9d 00:07:43.870 [2024-10-01 14:27:26.328881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.328907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.870 [2024-10-01 14:27:26.328962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9d9d819d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.328977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.870 [2024-10-01 14:27:26.329032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9d9d8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.329045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.870 [2024-10-01 14:27:26.329099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:7 nsid:0 cdw10:9d9d819d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.329113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.870 #52 NEW cov: 11895 ft: 14906 corp: 34/591b lim: 30 exec/s: 52 rss: 70Mb L: 28/28 MS: 1 CopyPart- 00:07:43.870 [2024-10-01 14:27:26.378438] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:43.870 [2024-10-01 14:27:26.378557] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b2b2 00:07:43.870 [2024-10-01 14:27:26.378666] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:43.870 [2024-10-01 14:27:26.378895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b2720272 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.378920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.870 [2024-10-01 14:27:26.378976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b2b283b2 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.378990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.870 [2024-10-01 14:27:26.379044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b2b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.870 [2024-10-01 14:27:26.379057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.129 #53 NEW cov: 11895 ft: 14921 corp: 35/610b lim: 30 exec/s: 53 rss: 70Mb L: 19/28 MS: 1 InsertRepeatedBytes- 00:07:44.129 [2024-10-01 14:27:26.418728] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (36868) > buf size (4096) 00:07:44.129 [2024-10-01 14:27:26.419090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:24000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.419116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.129 [2024-10-01 14:27:26.419171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.419185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.129 #54 NEW cov: 11895 ft: 15016 corp: 36/626b lim: 30 exec/s: 54 rss: 70Mb L: 16/28 MS: 1 EraseBytes- 00:07:44.129 [2024-10-01 14:27:26.468624] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (561156) > buf size (4096) 00:07:44.129 [2024-10-01 14:27:26.468855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2400020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.468882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.129 #56 NEW cov: 11895 ft: 15076 corp: 37/632b lim: 30 exec/s: 56 rss: 70Mb L: 6/28 
MS: 2 CrossOver-InsertByte- 00:07:44.129 [2024-10-01 14:27:26.508736] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:44.129 [2024-10-01 14:27:26.508853] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:44.129 [2024-10-01 14:27:26.509060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b2b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.509085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.129 [2024-10-01 14:27:26.509138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b3b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.509152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.129 #57 NEW cov: 11895 ft: 15113 corp: 38/647b lim: 30 exec/s: 57 rss: 70Mb L: 15/28 MS: 1 InsertByte- 00:07:44.129 [2024-10-01 14:27:26.548928] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.129 [2024-10-01 14:27:26.549043] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002f2f 00:07:44.129 [2024-10-01 14:27:26.549153] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.129 [2024-10-01 14:27:26.549261] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.129 [2024-10-01 14:27:26.549482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:24008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.549507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.129 [2024-10-01 14:27:26.549563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.549577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.129 [2024-10-01 14:27:26.549628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:2f2f832f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.549642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.129 [2024-10-01 14:27:26.549696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.549708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.129 #58 NEW cov: 11895 ft: 15121 corp: 39/671b lim: 30 exec/s: 58 rss: 70Mb L: 24/28 MS: 1 CopyPart- 00:07:44.129 [2024-10-01 14:27:26.598965] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6 00:07:44.129 [2024-10-01 14:27:26.599197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:44.129 [2024-10-01 14:27:26.599222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.129 #59 NEW cov: 11895 ft: 15126 corp: 40/678b lim: 30 exec/s: 59 rss: 70Mb L: 7/28 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:44.129 [2024-10-01 14:27:26.639143] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:44.129 [2024-10-01 14:27:26.639262] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b2b2 00:07:44.129 [2024-10-01 14:27:26.639472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:b2b202b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.639496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.129 [2024-10-01 14:27:26.639550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b3ff02b2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.129 [2024-10-01 14:27:26.639564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.388 #60 NEW cov: 11895 ft: 15187 corp: 41/693b lim: 30 exec/s: 60 rss: 70Mb L: 15/28 MS: 1 CrossOver- 00:07:44.388 [2024-10-01 14:27:26.689333] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.388 [2024-10-01 14:27:26.689452] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047596) > buf size (4096) 00:07:44.388 [2024-10-01 14:27:26.689558] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.388 [2024-10-01 14:27:26.689777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.388 [2024-10-01 14:27:26.689803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.388 [2024-10-01 14:27:26.689857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.388 [2024-10-01 14:27:26.689871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.388 [2024-10-01 14:27:26.689924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:06ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.388 [2024-10-01 14:27:26.689938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.388 #61 NEW cov: 11895 ft: 15199 corp: 42/713b lim: 30 exec/s: 61 rss: 70Mb L: 20/28 MS: 1 CopyPart- 00:07:44.388 [2024-10-01 14:27:26.729429] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.388 [2024-10-01 14:27:26.729549] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:44.388 [2024-10-01 14:27:26.729661] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:07:44.388 [2024-10-01 14:27:26.729898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff 
cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.388 [2024-10-01 14:27:26.729924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.388 [2024-10-01 14:27:26.729977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.388 [2024-10-01 14:27:26.729991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.388 [2024-10-01 14:27:26.730045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.388 [2024-10-01 14:27:26.730058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.388 #62 NEW cov: 11895 ft: 15238 corp: 43/733b lim: 30 exec/s: 31 rss: 71Mb L: 20/28 MS: 1 ChangeBinInt- 00:07:44.388 #62 DONE cov: 11895 ft: 15238 corp: 43/733b lim: 30 exec/s: 31 rss: 71Mb 00:07:44.388 ###### Recommended dictionary. ###### 00:07:44.388 "\000\000\000\000" # Uses: 0 00:07:44.388 ###### End of recommended dictionary. ###### 00:07:44.388 Done 62 runs in 2 second(s) 00:07:44.388 14:27:26 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_1.conf 00:07:44.388 14:27:26 -- ../common.sh@72 -- # (( i++ )) 00:07:44.388 14:27:26 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.388 14:27:26 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:44.388 14:27:26 -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:44.388 14:27:26 -- nvmf/run.sh@24 -- # local timen=1 00:07:44.388 14:27:26 -- nvmf/run.sh@25 -- # local core=0x1 00:07:44.388 14:27:26 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:44.389 14:27:26 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:44.389 14:27:26 -- nvmf/run.sh@29 -- # printf %02d 2 00:07:44.389 14:27:26 -- nvmf/run.sh@29 -- # port=4402 00:07:44.389 14:27:26 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:44.389 14:27:26 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:44.389 14:27:26 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:44.648 14:27:26 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 -r /var/tmp/spdk2.sock 00:07:44.648 [2024-10-01 14:27:26.939135] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:44.648 [2024-10-01 14:27:26.939209] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701294 ] 00:07:44.648 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.906 [2024-10-01 14:27:27.254053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.906 [2024-10-01 14:27:27.346391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:44.906 [2024-10-01 14:27:27.346530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.906 [2024-10-01 14:27:27.404938] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.906 [2024-10-01 14:27:27.421150] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:45.165 INFO: Running with entropic power schedule (0xFF, 100). 00:07:45.165 INFO: Seed: 1669978026 00:07:45.165 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:45.165 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:45.165 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:45.165 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.165 #2 INITED exec/s: 0 rss: 61Mb 00:07:45.165 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:45.165 This may also happen if the target rejected all inputs we tried so far 00:07:45.165 [2024-10-01 14:27:27.476444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.165 [2024-10-01 14:27:27.476476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.423 NEW_FUNC[1/670]: 0x43db78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:45.423 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:45.423 #3 NEW cov: 11580 ft: 11581 corp: 2/11b lim: 35 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:45.423 [2024-10-01 14:27:27.787048] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:45.423 [2024-10-01 14:27:27.787290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.423 [2024-10-01 14:27:27.787328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.423 #4 NEW cov: 11702 ft: 12153 corp: 3/21b lim: 35 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:45.423 [2024-10-01 14:27:27.837345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.423 [2024-10-01 14:27:27.837371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.423 #5 NEW cov: 11708 ft: 12376 corp: 4/32b lim: 35 exec/s: 0 rss: 68Mb L: 11/11 MS: 1 InsertByte- 
00:07:45.423 [2024-10-01 14:27:27.877434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.423 [2024-10-01 14:27:27.877459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.423 #6 NEW cov: 11793 ft: 12673 corp: 5/40b lim: 35 exec/s: 0 rss: 68Mb L: 8/11 MS: 1 EraseBytes- 00:07:45.423 [2024-10-01 14:27:27.917494] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:45.423 [2024-10-01 14:27:27.917617] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:45.423 [2024-10-01 14:27:27.917837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.423 [2024-10-01 14:27:27.917863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.423 [2024-10-01 14:27:27.917921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.423 [2024-10-01 14:27:27.917937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.423 [2024-10-01 14:27:27.917991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.423 [2024-10-01 14:27:27.918006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.423 #7 NEW cov: 11793 ft: 13147 corp: 6/61b lim: 35 exec/s: 0 rss: 68Mb L: 21/21 MS: 1 CrossOver- 00:07:45.680 [2024-10-01 14:27:27.957655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7018000a cdw11:e800bfcb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.680 [2024-10-01 14:27:27.957681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.680 #13 NEW cov: 11793 ft: 13285 corp: 7/72b lim: 35 exec/s: 0 rss: 69Mb L: 11/21 MS: 1 CMP- DE: "p\030\277\313\350\340!\000"- 00:07:45.680 [2024-10-01 14:27:27.997774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a00f0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.680 [2024-10-01 14:27:27.997798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.680 #14 NEW cov: 11793 ft: 13407 corp: 8/83b lim: 35 exec/s: 0 rss: 69Mb L: 11/21 MS: 1 InsertByte- 00:07:45.680 [2024-10-01 14:27:28.037887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.680 [2024-10-01 14:27:28.037912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.680 #15 NEW cov: 11793 ft: 13455 corp: 9/93b lim: 35 exec/s: 0 rss: 69Mb L: 10/21 MS: 1 ChangeBinInt- 00:07:45.680 [2024-10-01 14:27:28.077964] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid 
NSID 0 00:07:45.680 [2024-10-01 14:27:28.078296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.680 [2024-10-01 14:27:28.078325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.681 [2024-10-01 14:27:28.078383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.681 [2024-10-01 14:27:28.078398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.681 [2024-10-01 14:27:28.078454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0000008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.681 [2024-10-01 14:27:28.078467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.681 #16 NEW cov: 11793 ft: 13539 corp: 10/115b lim: 35 exec/s: 0 rss: 69Mb L: 22/22 MS: 1 CopyPart- 00:07:45.681 [2024-10-01 14:27:28.118107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.681 [2024-10-01 14:27:28.118131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.681 #17 NEW cov: 11793 ft: 13575 corp: 11/125b lim: 35 exec/s: 0 rss: 69Mb L: 10/22 MS: 1 ChangeBit- 00:07:45.681 [2024-10-01 14:27:28.158316] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:45.681 [2024-10-01 14:27:28.158572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.681 [2024-10-01 14:27:28.158597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.681 [2024-10-01 14:27:28.158709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.681 [2024-10-01 14:27:28.158730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.681 #18 NEW cov: 11793 ft: 14126 corp: 12/146b lim: 35 exec/s: 0 rss: 69Mb L: 21/22 MS: 1 CrossOver- 00:07:45.938 [2024-10-01 14:27:28.208410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.938 [2024-10-01 14:27:28.208436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.938 #19 NEW cov: 11793 ft: 14142 corp: 13/156b lim: 35 exec/s: 0 rss: 69Mb L: 10/22 MS: 1 ChangeBinInt- 00:07:45.938 [2024-10-01 14:27:28.248466] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:45.938 [2024-10-01 14:27:28.248803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.938 [2024-10-01 14:27:28.248829] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.938 [2024-10-01 14:27:28.248885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00008000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.938 [2024-10-01 14:27:28.248901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.938 [2024-10-01 14:27:28.248955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0000008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.938 [2024-10-01 14:27:28.248968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.938 #20 NEW cov: 11793 ft: 14183 corp: 14/178b lim: 35 exec/s: 0 rss: 69Mb L: 22/22 MS: 1 ChangeBit- 00:07:45.938 [2024-10-01 14:27:28.298694] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:45.938 [2024-10-01 14:27:28.298921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:70000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.939 [2024-10-01 14:27:28.298945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.939 [2024-10-01 14:27:28.299001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:cbe800bf cdw11:0000e021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.939 [2024-10-01 14:27:28.299015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.939 [2024-10-01 14:27:28.299070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.939 [2024-10-01 14:27:28.299085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.939 #21 NEW cov: 11793 ft: 14220 corp: 15/199b lim: 35 exec/s: 0 rss: 69Mb L: 21/22 MS: 1 PersAutoDict- DE: "p\030\277\313\350\340!\000"- 00:07:45.939 [2024-10-01 14:27:28.338756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:002d000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.939 [2024-10-01 14:27:28.338781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.939 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:45.939 #22 NEW cov: 11816 ft: 14277 corp: 16/208b lim: 35 exec/s: 0 rss: 69Mb L: 9/22 MS: 1 InsertByte- 00:07:45.939 [2024-10-01 14:27:28.378886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3200000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.939 [2024-10-01 14:27:28.378911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.939 #23 NEW cov: 11816 ft: 14299 corp: 17/220b lim: 35 exec/s: 0 rss: 69Mb L: 12/22 MS: 1 InsertByte- 00:07:45.939 [2024-10-01 14:27:28.418970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:701800f0 cdw11:e800bfcb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.939 [2024-10-01 14:27:28.418995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.939 #29 NEW cov: 11816 ft: 14315 corp: 18/231b lim: 35 exec/s: 0 rss: 69Mb L: 11/22 MS: 1 PersAutoDict- DE: "p\030\277\313\350\340!\000"- 00:07:45.939 [2024-10-01 14:27:28.459085] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:45.939 [2024-10-01 14:27:28.459419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:e0000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.939 [2024-10-01 14:27:28.459449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.939 [2024-10-01 14:27:28.459517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:0000f000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.939 [2024-10-01 14:27:28.459535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.939 [2024-10-01 14:27:28.459593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.939 [2024-10-01 14:27:28.459607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.208 #30 NEW cov: 11816 ft: 14355 corp: 19/254b lim: 35 exec/s: 30 rss: 69Mb L: 23/23 MS: 1 CrossOver- 00:07:46.208 [2024-10-01 14:27:28.509336] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.208 [2024-10-01 14:27:28.509462] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.208 [2024-10-01 14:27:28.509684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7018002a cdw11:e800bfcb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.509710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.208 [2024-10-01 14:27:28.509771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000021 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.509786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.208 [2024-10-01 14:27:28.509840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:01f00000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.509855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.208 [2024-10-01 14:27:28.509910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.509925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.208 #31 
NEW cov: 11816 ft: 14829 corp: 20/283b lim: 35 exec/s: 31 rss: 69Mb L: 29/29 MS: 1 PersAutoDict- DE: "p\030\277\313\350\340!\000"- 00:07:46.208 [2024-10-01 14:27:28.549354] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.208 [2024-10-01 14:27:28.549474] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.208 [2024-10-01 14:27:28.549585] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.208 [2024-10-01 14:27:28.549806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.549831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.208 [2024-10-01 14:27:28.549886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.549901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.208 [2024-10-01 14:27:28.549957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.549971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.208 [2024-10-01 14:27:28.550028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.550043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.208 #32 NEW cov: 11816 ft: 14900 corp: 21/312b lim: 35 exec/s: 32 rss: 69Mb L: 29/29 MS: 1 CopyPart- 00:07:46.208 [2024-10-01 14:27:28.589471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00ab000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.589496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.208 #33 NEW cov: 11816 ft: 14941 corp: 22/322b lim: 35 exec/s: 33 rss: 69Mb L: 10/29 MS: 1 ChangeByte- 00:07:46.208 [2024-10-01 14:27:28.629577] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.208 [2024-10-01 14:27:28.629700] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.208 [2024-10-01 14:27:28.629815] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.208 [2024-10-01 14:27:28.630043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.630068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.208 [2024-10-01 14:27:28.630124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.630140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.208 [2024-10-01 14:27:28.630195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.630210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.208 [2024-10-01 14:27:28.630260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.630275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.208 #34 NEW cov: 11816 ft: 14950 corp: 23/350b lim: 35 exec/s: 34 rss: 69Mb L: 28/29 MS: 1 InsertRepeatedBytes- 00:07:46.208 [2024-10-01 14:27:28.669716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00ab000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.669744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.208 #35 NEW cov: 11816 ft: 14993 corp: 24/360b lim: 35 exec/s: 35 rss: 69Mb L: 10/29 MS: 1 ShuffleBytes- 00:07:46.208 [2024-10-01 14:27:28.709831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.208 [2024-10-01 14:27:28.709857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.208 #36 NEW cov: 11816 ft: 14997 corp: 25/371b lim: 35 exec/s: 36 rss: 69Mb L: 11/29 MS: 1 CrossOver- 00:07:46.466 [2024-10-01 14:27:28.749941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00ab000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.466 [2024-10-01 14:27:28.749967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.466 #37 NEW cov: 11816 ft: 15037 corp: 26/380b lim: 35 exec/s: 37 rss: 69Mb L: 9/29 MS: 1 EraseBytes- 00:07:46.466 [2024-10-01 14:27:28.790026] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.466 [2024-10-01 14:27:28.790361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.466 [2024-10-01 14:27:28.790386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.467 [2024-10-01 14:27:28.790445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.790460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.467 NEW_FUNC[1/3]: 0x10ba328 in spdk_nvmf_ctrlr_identify_iocs_specific 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2921 00:07:46.467 NEW_FUNC[2/3]: 0x10bac68 in nvmf_ctrlr_identify_iocs_nvm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2877 00:07:46.467 #38 NEW cov: 11855 ft: 15083 corp: 27/401b lim: 35 exec/s: 38 rss: 69Mb L: 21/29 MS: 1 ChangeBinInt- 00:07:46.467 [2024-10-01 14:27:28.830132] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.467 [2024-10-01 14:27:28.830489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6a00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.830514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.467 [2024-10-01 14:27:28.830571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.830586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.467 #39 NEW cov: 11855 ft: 15092 corp: 28/422b lim: 35 exec/s: 39 rss: 70Mb L: 21/29 MS: 1 ChangeByte- 00:07:46.467 [2024-10-01 14:27:28.880387] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.467 [2024-10-01 14:27:28.880508] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.467 [2024-10-01 14:27:28.880730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.880756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.467 [2024-10-01 14:27:28.880815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.880829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.467 [2024-10-01 14:27:28.880883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.880898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.467 [2024-10-01 14:27:28.880955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.880969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.467 #40 NEW cov: 11855 ft: 15139 corp: 29/450b lim: 35 exec/s: 40 rss: 70Mb L: 28/29 MS: 1 ChangeBit- 00:07:46.467 [2024-10-01 14:27:28.930418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00ab000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.930443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:46.467 #41 NEW cov: 11855 ft: 15147 corp: 30/460b lim: 35 exec/s: 41 rss: 70Mb L: 10/29 MS: 1 ShuffleBytes- 00:07:46.467 [2024-10-01 14:27:28.970392] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.467 [2024-10-01 14:27:28.970705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.970736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.467 [2024-10-01 14:27:28.970793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:67e000f5 cdw11:e00055e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.467 [2024-10-01 14:27:28.970807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.746 #42 NEW cov: 11855 ft: 15228 corp: 31/478b lim: 35 exec/s: 42 rss: 70Mb L: 18/29 MS: 1 CMP- DE: "\365g\340U\351\340!\000"- 00:07:46.746 [2024-10-01 14:27:29.010707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00ab000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.010738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.746 #43 NEW cov: 11855 ft: 15231 corp: 32/487b lim: 35 exec/s: 43 rss: 70Mb L: 9/29 MS: 1 ChangeBit- 00:07:46.746 [2024-10-01 14:27:29.050830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.050855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.746 #44 NEW cov: 11855 ft: 15243 corp: 33/498b lim: 35 exec/s: 44 rss: 70Mb L: 11/29 MS: 1 CopyPart- 00:07:46.746 [2024-10-01 14:27:29.090804] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.746 [2024-10-01 14:27:29.090925] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.746 [2024-10-01 14:27:29.091055] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.746 [2024-10-01 14:27:29.091269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.091297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.746 [2024-10-01 14:27:29.091355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.091370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.746 [2024-10-01 14:27:29.091425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.091442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.746 #45 NEW cov: 11855 ft: 15254 corp: 34/525b lim: 35 exec/s: 45 rss: 70Mb L: 27/29 MS: 1 CrossOver- 00:07:46.746 [2024-10-01 14:27:29.131046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:02000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.131072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.746 #46 NEW cov: 11855 ft: 15265 corp: 35/536b lim: 35 exec/s: 46 rss: 70Mb L: 11/29 MS: 1 ChangeBinInt- 00:07:46.746 [2024-10-01 14:27:29.161025] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.746 [2024-10-01 14:27:29.161143] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.746 [2024-10-01 14:27:29.161255] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.746 [2024-10-01 14:27:29.161367] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.746 [2024-10-01 14:27:29.161601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.161627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.746 [2024-10-01 14:27:29.161683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.161699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.746 [2024-10-01 14:27:29.161771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.161786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.746 [2024-10-01 14:27:29.161844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.161859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.746 #50 NEW cov: 11855 ft: 15316 corp: 36/565b lim: 35 exec/s: 50 rss: 70Mb L: 29/29 MS: 4 ChangeBit-InsertRepeatedBytes-CopyPart-InsertRepeatedBytes- 00:07:46.746 [2024-10-01 14:27:29.201238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00ab000a cdw11:0000007d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.746 [2024-10-01 14:27:29.201263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.746 #51 NEW cov: 11855 ft: 15324 corp: 37/576b lim: 35 exec/s: 51 rss: 70Mb L: 11/29 MS: 1 InsertByte- 00:07:46.746 [2024-10-01 14:27:29.231319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0001000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:46.746 [2024-10-01 14:27:29.231344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.746 #52 NEW cov: 11855 ft: 15329 corp: 38/586b lim: 35 exec/s: 52 rss: 70Mb L: 10/29 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:47.041 [2024-10-01 14:27:29.271449] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:47.041 [2024-10-01 14:27:29.271568] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:47.041 [2024-10-01 14:27:29.271681] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:47.041 [2024-10-01 14:27:29.271902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.271928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.041 [2024-10-01 14:27:29.271988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.272004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.041 [2024-10-01 14:27:29.272061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.272077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.041 [2024-10-01 14:27:29.272132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.272146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.041 #53 NEW cov: 11855 ft: 15403 corp: 39/614b lim: 35 exec/s: 53 rss: 70Mb L: 28/29 MS: 1 ChangeByte- 00:07:47.041 [2024-10-01 14:27:29.311593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:2a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.311618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.041 #54 NEW cov: 11855 ft: 15411 corp: 40/627b lim: 35 exec/s: 54 rss: 70Mb L: 13/29 MS: 1 CopyPart- 00:07:47.041 [2024-10-01 14:27:29.351666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.351690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.041 #55 NEW cov: 11855 ft: 15440 corp: 41/634b lim: 35 exec/s: 55 rss: 70Mb L: 7/29 MS: 1 EraseBytes- 00:07:47.041 [2024-10-01 14:27:29.391792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0001000a cdw11:2d000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.391819] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.041 #56 NEW cov: 11855 ft: 15475 corp: 42/646b lim: 35 exec/s: 56 rss: 70Mb L: 12/29 MS: 1 CrossOver- 00:07:47.041 [2024-10-01 14:27:29.432205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1d1d001d cdw11:1d001d1d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.432230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.041 [2024-10-01 14:27:29.432287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:1d1d001d cdw11:1d001d1d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.432301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.041 [2024-10-01 14:27:29.432357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:1d1d001d cdw11:1d001d1d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.432370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.041 #57 NEW cov: 11855 ft: 15500 corp: 43/667b lim: 35 exec/s: 57 rss: 70Mb L: 21/29 MS: 1 InsertRepeatedBytes- 00:07:47.041 [2024-10-01 14:27:29.472040] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:47.041 [2024-10-01 14:27:29.472160] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:47.041 [2024-10-01 14:27:29.472465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.472490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.041 [2024-10-01 14:27:29.472547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.041 [2024-10-01 14:27:29.472562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.041 [2024-10-01 14:27:29.472621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.042 [2024-10-01 14:27:29.472635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.042 [2024-10-01 14:27:29.472693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ff0000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.042 [2024-10-01 14:27:29.472706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.042 #58 NEW cov: 11855 ft: 15505 corp: 44/698b lim: 35 exec/s: 29 rss: 70Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:47.042 #58 DONE cov: 11855 ft: 15505 corp: 44/698b lim: 35 exec/s: 29 rss: 70Mb 00:07:47.042 ###### Recommended dictionary. 
###### 00:07:47.042 "p\030\277\313\350\340!\000" # Uses: 3 00:07:47.042 "\365g\340U\351\340!\000" # Uses: 0 00:07:47.042 "\001\000\000\000\000\000\000\000" # Uses: 0 00:07:47.042 ###### End of recommended dictionary. ###### 00:07:47.042 Done 58 runs in 2 second(s) 00:07:47.333 14:27:29 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_2.conf 00:07:47.333 14:27:29 -- ../common.sh@72 -- # (( i++ )) 00:07:47.333 14:27:29 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.333 14:27:29 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:47.333 14:27:29 -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:47.333 14:27:29 -- nvmf/run.sh@24 -- # local timen=1 00:07:47.333 14:27:29 -- nvmf/run.sh@25 -- # local core=0x1 00:07:47.333 14:27:29 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:47.333 14:27:29 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:47.333 14:27:29 -- nvmf/run.sh@29 -- # printf %02d 3 00:07:47.333 14:27:29 -- nvmf/run.sh@29 -- # port=4403 00:07:47.333 14:27:29 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:47.333 14:27:29 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:47.334 14:27:29 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:47.334 14:27:29 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 -r /var/tmp/spdk3.sock 00:07:47.334 [2024-10-01 14:27:29.669635] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:47.334 [2024-10-01 14:27:29.669734] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701662 ] 00:07:47.334 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.600 [2024-10-01 14:27:29.979629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.600 [2024-10-01 14:27:30.073033] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.600 [2024-10-01 14:27:30.073185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.858 [2024-10-01 14:27:30.132208] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.858 [2024-10-01 14:27:30.148422] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:47.858 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:47.858 INFO: Seed: 102005089 00:07:47.858 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:47.858 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:47.858 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:47.858 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.858 #2 INITED exec/s: 0 rss: 61Mb 00:07:47.858 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.858 This may also happen if the target rejected all inputs we tried so far 00:07:48.116 NEW_FUNC[1/659]: 0x43f858 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:48.116 NEW_FUNC[2/659]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:48.116 #6 NEW cov: 11488 ft: 11489 corp: 2/14b lim: 20 exec/s: 0 rss: 68Mb L: 13/13 MS: 4 ChangeBit-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:07:48.116 #7 NEW cov: 11606 ft: 12384 corp: 3/23b lim: 20 exec/s: 0 rss: 68Mb L: 9/13 MS: 1 EraseBytes- 00:07:48.116 #8 NEW cov: 11612 ft: 12785 corp: 4/36b lim: 20 exec/s: 0 rss: 68Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:48.374 #9 NEW cov: 11697 ft: 13020 corp: 5/50b lim: 20 exec/s: 0 rss: 68Mb L: 14/14 MS: 1 InsertByte- 00:07:48.374 #13 NEW cov: 11697 ft: 13471 corp: 6/54b lim: 20 exec/s: 0 rss: 68Mb L: 4/14 MS: 4 InsertByte-CopyPart-ShuffleBytes-InsertByte- 00:07:48.374 #14 NEW cov: 11697 ft: 13530 corp: 7/67b lim: 20 exec/s: 0 rss: 68Mb L: 13/14 MS: 1 ChangeByte- 00:07:48.374 #15 NEW cov: 11714 ft: 13769 corp: 8/86b lim: 20 exec/s: 0 rss: 68Mb L: 19/19 MS: 1 CrossOver- 00:07:48.374 NEW_FUNC[1/4]: 0x111e188 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3224 00:07:48.374 NEW_FUNC[2/4]: 0x111ed08 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3166 00:07:48.374 #16 NEW cov: 11798 ft: 13931 corp: 9/94b lim: 20 exec/s: 0 rss: 69Mb L: 8/19 MS: 1 CMP- DE: "\031\000\000\000"- 00:07:48.374 #18 NEW cov: 11798 ft: 13960 corp: 10/99b lim: 20 exec/s: 0 rss: 69Mb L: 5/19 MS: 2 ShuffleBytes-PersAutoDict- DE: "\031\000\000\000"- 00:07:48.632 #19 NEW cov: 11798 ft: 14011 corp: 11/112b lim: 20 exec/s: 0 rss: 69Mb L: 13/19 MS: 1 ChangeByte- 00:07:48.632 #20 NEW cov: 11798 ft: 14089 corp: 12/131b lim: 20 exec/s: 0 rss: 69Mb L: 19/19 MS: 1 ShuffleBytes- 00:07:48.632 #21 NEW cov: 11798 ft: 14129 corp: 13/137b lim: 20 exec/s: 0 rss: 69Mb L: 6/19 MS: 1 InsertByte- 00:07:48.632 #22 NEW cov: 11798 ft: 14144 corp: 14/145b lim: 20 exec/s: 0 rss: 69Mb L: 8/19 MS: 1 PersAutoDict- DE: "\031\000\000\000"- 00:07:48.632 #24 NEW cov: 11798 ft: 14174 corp: 15/150b lim: 20 exec/s: 0 rss: 69Mb L: 5/19 MS: 2 ShuffleBytes-CrossOver- 00:07:48.632 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:48.632 #25 NEW cov: 11821 ft: 14268 corp: 16/166b lim: 20 exec/s: 0 rss: 69Mb L: 16/19 MS: 1 EraseBytes- 00:07:48.890 #26 NEW cov: 11821 ft: 14297 corp: 17/181b lim: 20 exec/s: 0 rss: 69Mb L: 15/19 MS: 1 CrossOver- 00:07:48.890 #27 NEW cov: 11821 ft: 14311 corp: 18/186b lim: 20 exec/s: 27 rss: 69Mb L: 5/19 MS: 1 ChangeBit- 00:07:48.890 #28 NEW cov: 11821 ft: 14388 corp: 19/205b lim: 20 exec/s: 28 rss: 69Mb L: 19/19 MS: 1 ShuffleBytes- 00:07:48.890 #29 NEW cov: 11821 ft: 14464 corp: 
20/220b lim: 20 exec/s: 29 rss: 69Mb L: 15/19 MS: 1 ChangeByte- 00:07:48.890 #30 NEW cov: 11821 ft: 14489 corp: 21/232b lim: 20 exec/s: 30 rss: 69Mb L: 12/19 MS: 1 EraseBytes- 00:07:48.890 #36 NEW cov: 11821 ft: 14527 corp: 22/236b lim: 20 exec/s: 36 rss: 69Mb L: 4/19 MS: 1 ChangeBit- 00:07:48.890 #37 NEW cov: 11821 ft: 14536 corp: 23/247b lim: 20 exec/s: 37 rss: 69Mb L: 11/19 MS: 1 EraseBytes- 00:07:49.147 #43 NEW cov: 11821 ft: 14614 corp: 24/256b lim: 20 exec/s: 43 rss: 69Mb L: 9/19 MS: 1 InsertByte- 00:07:49.147 #44 NEW cov: 11821 ft: 14666 corp: 25/269b lim: 20 exec/s: 44 rss: 69Mb L: 13/19 MS: 1 ChangeBinInt- 00:07:49.147 #49 NEW cov: 11821 ft: 14682 corp: 26/276b lim: 20 exec/s: 49 rss: 69Mb L: 7/19 MS: 5 EraseBytes-ShuffleBytes-CrossOver-ChangeByte-PersAutoDict- DE: "\031\000\000\000"- 00:07:49.147 #50 NEW cov: 11821 ft: 14729 corp: 27/289b lim: 20 exec/s: 50 rss: 69Mb L: 13/19 MS: 1 ChangeBinInt- 00:07:49.147 #51 NEW cov: 11821 ft: 14743 corp: 28/304b lim: 20 exec/s: 51 rss: 69Mb L: 15/19 MS: 1 CopyPart- 00:07:49.147 #52 NEW cov: 11821 ft: 14754 corp: 29/312b lim: 20 exec/s: 52 rss: 69Mb L: 8/19 MS: 1 ChangeBinInt- 00:07:49.405 #53 NEW cov: 11821 ft: 14774 corp: 30/323b lim: 20 exec/s: 53 rss: 69Mb L: 11/19 MS: 1 ChangeByte- 00:07:49.405 #54 NEW cov: 11821 ft: 14776 corp: 31/331b lim: 20 exec/s: 54 rss: 69Mb L: 8/19 MS: 1 ChangeByte- 00:07:49.405 #55 NEW cov: 11821 ft: 14804 corp: 32/338b lim: 20 exec/s: 55 rss: 69Mb L: 7/19 MS: 1 EraseBytes- 00:07:49.405 #56 NEW cov: 11821 ft: 14819 corp: 33/343b lim: 20 exec/s: 56 rss: 69Mb L: 5/19 MS: 1 CopyPart- 00:07:49.405 #57 NEW cov: 11821 ft: 14825 corp: 34/356b lim: 20 exec/s: 57 rss: 69Mb L: 13/19 MS: 1 ChangeBit- 00:07:49.405 #58 NEW cov: 11821 ft: 14830 corp: 35/372b lim: 20 exec/s: 58 rss: 69Mb L: 16/19 MS: 1 CopyPart- 00:07:49.662 #59 NEW cov: 11821 ft: 14845 corp: 36/385b lim: 20 exec/s: 59 rss: 69Mb L: 13/19 MS: 1 ShuffleBytes- 00:07:49.662 #60 NEW cov: 11821 ft: 14852 corp: 37/389b lim: 20 exec/s: 60 rss: 70Mb L: 4/19 MS: 1 EraseBytes- 00:07:49.662 #61 NEW cov: 11821 ft: 14871 corp: 38/404b lim: 20 exec/s: 61 rss: 70Mb L: 15/19 MS: 1 ShuffleBytes- 00:07:49.662 #62 NEW cov: 11821 ft: 14906 corp: 39/423b lim: 20 exec/s: 62 rss: 70Mb L: 19/19 MS: 1 CMP- DE: "\377 \340\357\\R\003\030"- 00:07:49.662 #63 NEW cov: 11821 ft: 14965 corp: 40/438b lim: 20 exec/s: 63 rss: 70Mb L: 15/19 MS: 1 ChangeBit- 00:07:49.662 #64 NEW cov: 11821 ft: 14981 corp: 41/452b lim: 20 exec/s: 64 rss: 70Mb L: 14/19 MS: 1 ChangeByte- 00:07:49.662 [2024-10-01 14:27:32.139563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.662 [2024-10-01 14:27:32.139612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.662 NEW_FUNC[1/15]: 0x1537108 in nvme_ctrlr_process_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3091 00:07:49.662 NEW_FUNC[2/15]: 0x15cd498 in nvme_ctrlr_queue_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3131 00:07:49.663 #65 NEW cov: 12038 ft: 15268 corp: 42/468b lim: 20 exec/s: 65 rss: 70Mb L: 16/19 MS: 1 CMP- DE: "z\000\000\000\000\000\000\000"- 00:07:49.921 #66 NEW cov: 12038 ft: 15298 corp: 43/485b lim: 20 exec/s: 33 rss: 70Mb L: 17/19 MS: 1 InsertByte- 00:07:49.921 #66 DONE cov: 12038 ft: 15298 corp: 43/485b lim: 20 exec/s: 33 rss: 70Mb 00:07:49.921 ###### Recommended dictionary. 
###### 00:07:49.921 "\031\000\000\000" # Uses: 3 00:07:49.921 "\377 \340\357\\R\003\030" # Uses: 0 00:07:49.921 "z\000\000\000\000\000\000\000" # Uses: 0 00:07:49.921 ###### End of recommended dictionary. ###### 00:07:49.921 Done 66 runs in 2 second(s) 00:07:49.921 14:27:32 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_3.conf 00:07:49.921 14:27:32 -- ../common.sh@72 -- # (( i++ )) 00:07:49.921 14:27:32 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.921 14:27:32 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:49.921 14:27:32 -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:49.921 14:27:32 -- nvmf/run.sh@24 -- # local timen=1 00:07:49.921 14:27:32 -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.921 14:27:32 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:49.921 14:27:32 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:49.921 14:27:32 -- nvmf/run.sh@29 -- # printf %02d 4 00:07:49.921 14:27:32 -- nvmf/run.sh@29 -- # port=4404 00:07:49.921 14:27:32 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:49.921 14:27:32 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:49.921 14:27:32 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.921 14:27:32 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 -r /var/tmp/spdk4.sock 00:07:49.921 [2024-10-01 14:27:32.408550] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:49.921 [2024-10-01 14:27:32.408629] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702031 ] 00:07:49.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.487 [2024-10-01 14:27:32.721047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.487 [2024-10-01 14:27:32.806921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.487 [2024-10-01 14:27:32.807047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.487 [2024-10-01 14:27:32.865357] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.487 [2024-10-01 14:27:32.881569] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:50.487 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:50.487 INFO: Seed: 2834988036 00:07:50.487 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:50.487 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:50.487 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:50.487 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.487 #2 INITED exec/s: 0 rss: 61Mb 00:07:50.487 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.487 This may also happen if the target rejected all inputs we tried so far 00:07:50.487 [2024-10-01 14:27:32.950147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.487 [2024-10-01 14:27:32.950194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.487 [2024-10-01 14:27:32.950337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.487 [2024-10-01 14:27:32.950360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.487 [2024-10-01 14:27:32.950496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.487 [2024-10-01 14:27:32.950515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.487 [2024-10-01 14:27:32.950644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.487 [2024-10-01 14:27:32.950666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.745 NEW_FUNC[1/667]: 0x440958 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:50.745 NEW_FUNC[2/667]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:50.745 #4 NEW cov: 11548 ft: 11546 corp: 2/29b lim: 35 exec/s: 0 rss: 68Mb L: 28/28 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:51.004 [2024-10-01 14:27:33.300242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.300287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.300387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.300405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.300490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 
[2024-10-01 14:27:33.300505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.300588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.300605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.004 NEW_FUNC[1/4]: 0xf4b568 in posix_sock_group_impl_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1965 00:07:51.004 NEW_FUNC[2/4]: 0x1999018 in spdk_sock_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:702 00:07:51.004 #5 NEW cov: 11714 ft: 12155 corp: 3/57b lim: 35 exec/s: 0 rss: 68Mb L: 28/28 MS: 1 ChangeByte- 00:07:51.004 [2024-10-01 14:27:33.360808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.360841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.360929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.360946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.361038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.361052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.361136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff690003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.361151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.361236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.361252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.004 #6 NEW cov: 11720 ft: 12522 corp: 4/92b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CrossOver- 00:07:51.004 [2024-10-01 14:27:33.420534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.420565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.420647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.420663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.420765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.420780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.420863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.420878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.004 #7 NEW cov: 11805 ft: 12801 corp: 5/120b lim: 35 exec/s: 0 rss: 69Mb L: 28/35 MS: 1 ShuffleBytes- 00:07:51.004 [2024-10-01 14:27:33.470784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.470810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.470889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.470904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.470986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.471000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.004 [2024-10-01 14:27:33.471090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.004 [2024-10-01 14:27:33.471106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.004 #8 NEW cov: 11805 ft: 12845 corp: 6/148b lim: 35 exec/s: 0 rss: 69Mb L: 28/35 MS: 1 ShuffleBytes- 00:07:51.263 [2024-10-01 14:27:33.531092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.531118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.531193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.531209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.531294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.531308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.531395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fffffffd cdw11:ff690003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.531410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.531500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.531514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.263 #9 NEW cov: 11805 ft: 12999 corp: 7/183b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBit- 00:07:51.263 [2024-10-01 14:27:33.591487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.591513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.591601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.591616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.591716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.591735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.591823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2300fffd cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.591838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.591926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.591941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.263 #10 NEW cov: 11805 ft: 13071 corp: 8/218b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:51.263 [2024-10-01 14:27:33.651535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.651560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.651646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.651668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.651762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.651777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.651867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff690003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.651882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.651966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.651981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.263 #11 NEW cov: 11805 ft: 13151 corp: 9/253b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:07:51.263 [2024-10-01 14:27:33.700822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fffff6ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.700848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.700930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:5bffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.700944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.263 #21 NEW cov: 11805 ft: 13503 corp: 10/269b lim: 35 exec/s: 0 rss: 69Mb L: 16/35 MS: 5 InsertByte-CrossOver-ChangeBinInt-ChangeByte-CrossOver- 00:07:51.263 [2024-10-01 14:27:33.751634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.751659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.751741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff90ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.751767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.751854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.751870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.263 [2024-10-01 14:27:33.751953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.263 [2024-10-01 14:27:33.751968] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.263 #22 NEW cov: 11805 ft: 13573 corp: 11/297b lim: 35 exec/s: 0 rss: 69Mb L: 28/35 MS: 1 ChangeByte- 00:07:51.522 [2024-10-01 14:27:33.801876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.801904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.802002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.802021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.802108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.802124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.802202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffff25 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.802217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.522 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:51.522 #23 NEW cov: 11828 ft: 13627 corp: 12/326b lim: 35 exec/s: 0 rss: 69Mb L: 29/35 MS: 1 InsertByte- 00:07:51.522 [2024-10-01 14:27:33.852009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85850001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.852037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.852135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85850001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.852151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.852234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85850001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.852250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.852343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85850001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.852359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.522 #24 NEW cov: 11828 ft: 13643 corp: 13/358b lim: 35 exec/s: 0 rss: 69Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:07:51.522 
[2024-10-01 14:27:33.902511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.902538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.902627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.902644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.902732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.902746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.902831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.902846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.902933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.902952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.522 #25 NEW cov: 11828 ft: 13676 corp: 14/393b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CrossOver- 00:07:51.522 [2024-10-01 14:27:33.952718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.952747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.952833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.952849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.952933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.952947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.953034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:bfffffff cdw11:ff690003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.953049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:33.953139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:33.953154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.522 #26 NEW cov: 11828 ft: 13703 corp: 15/428b lim: 35 exec/s: 26 rss: 69Mb L: 35/35 MS: 1 ChangeBit- 00:07:51.522 [2024-10-01 14:27:34.012517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:85858585 cdw11:85850001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:34.012543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:34.012630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85850001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:34.012646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.522 [2024-10-01 14:27:34.012727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:20000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.522 [2024-10-01 14:27:34.012741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.523 [2024-10-01 14:27:34.012838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:85850085 cdw11:85850001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.523 [2024-10-01 14:27:34.012854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.523 #27 NEW cov: 11828 ft: 13720 corp: 16/460b lim: 35 exec/s: 27 rss: 69Mb L: 32/35 MS: 1 ChangeBinInt- 00:07:51.781 [2024-10-01 14:27:34.073156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.781 [2024-10-01 14:27:34.073183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.073271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.073289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.073374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.073389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.073478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff690003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.073492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.073571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.073587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.782 #28 NEW cov: 11828 ft: 13737 corp: 17/495b lim: 35 exec/s: 28 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:51.782 [2024-10-01 14:27:34.123049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.123073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.123155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.123170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.123263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.123279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.123376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.123395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.782 #29 NEW cov: 11828 ft: 13754 corp: 18/523b lim: 35 exec/s: 29 rss: 69Mb L: 28/35 MS: 1 ShuffleBytes- 00:07:51.782 [2024-10-01 14:27:34.173253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.173278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.173362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fffff6ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.173377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.173460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:5bffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.173473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.173557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.173575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.782 #30 NEW cov: 11828 ft: 13778 corp: 19/554b lim: 35 exec/s: 30 rss: 69Mb L: 31/35 MS: 1 CrossOver- 
00:07:51.782 [2024-10-01 14:27:34.223386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.223411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.223495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.223511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.223591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.223607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.223693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.223710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.782 #31 NEW cov: 11828 ft: 13870 corp: 20/582b lim: 35 exec/s: 31 rss: 69Mb L: 28/35 MS: 1 ChangeBit- 00:07:51.782 [2024-10-01 14:27:34.272945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fffff6ff cdw11:87870001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.272970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.782 [2024-10-01 14:27:34.273060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff5b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.782 [2024-10-01 14:27:34.273076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.782 #32 NEW cov: 11828 ft: 13876 corp: 21/601b lim: 35 exec/s: 32 rss: 69Mb L: 19/35 MS: 1 InsertRepeatedBytes- 00:07:52.041 [2024-10-01 14:27:34.334143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.334169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.334257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.334274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.334356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.334371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.334453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:bfffffff cdw11:ff3b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.334468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.334545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.334564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.041 #33 NEW cov: 11828 ft: 13894 corp: 22/636b lim: 35 exec/s: 33 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:07:52.041 [2024-10-01 14:27:34.394045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.394072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.394155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fffff6ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.394171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.394260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:5bffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.394276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.394364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff3dff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.394385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.041 #34 NEW cov: 11828 ft: 13938 corp: 23/668b lim: 35 exec/s: 34 rss: 69Mb L: 32/35 MS: 1 InsertByte- 00:07:52.041 [2024-10-01 14:27:34.454312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.454339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.454441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.454458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.454541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.454556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.454640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff250003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.454656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.041 #35 NEW cov: 11828 ft: 13948 corp: 24/697b lim: 35 exec/s: 35 rss: 69Mb L: 29/35 MS: 1 CopyPart- 00:07:52.041 [2024-10-01 14:27:34.514872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.514902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.515001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.515017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.515096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff0005 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.515115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.515197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.515213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.041 [2024-10-01 14:27:34.515297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.041 [2024-10-01 14:27:34.515313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.041 #36 NEW cov: 11828 ft: 13957 corp: 25/732b lim: 35 exec/s: 36 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:52.300 [2024-10-01 14:27:34.574244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.574273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.574372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.574387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.574490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.574505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:07:52.300 #37 NEW cov: 11828 ft: 14161 corp: 26/758b lim: 35 exec/s: 37 rss: 69Mb L: 26/35 MS: 1 InsertRepeatedBytes- 00:07:52.300 [2024-10-01 14:27:34.624104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fffff6ff cdw11:87870001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.624132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.624222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff2dff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.624237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.300 #38 NEW cov: 11828 ft: 14172 corp: 27/778b lim: 35 exec/s: 38 rss: 69Mb L: 20/35 MS: 1 InsertByte- 00:07:52.300 [2024-10-01 14:27:34.685073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.685099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.685179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fffff6ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.685195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.685290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:5affffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.685305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.685389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.685408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.300 #39 NEW cov: 11828 ft: 14215 corp: 28/809b lim: 35 exec/s: 39 rss: 69Mb L: 31/35 MS: 1 ChangeBit- 00:07:52.300 [2024-10-01 14:27:34.735614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.735639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.735733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.735749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.735833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:69ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 
14:27:34.735848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.735932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.735948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.300 [2024-10-01 14:27:34.736034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ff69bfff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.300 [2024-10-01 14:27:34.736048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.300 #40 NEW cov: 11828 ft: 14232 corp: 29/844b lim: 35 exec/s: 40 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:52.301 [2024-10-01 14:27:34.785803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.301 [2024-10-01 14:27:34.785829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.301 [2024-10-01 14:27:34.785911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.301 [2024-10-01 14:27:34.785926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.301 [2024-10-01 14:27:34.786013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.301 [2024-10-01 14:27:34.786026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.301 [2024-10-01 14:27:34.786104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:bfffffff cdw11:ff690003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.301 [2024-10-01 14:27:34.786119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.301 [2024-10-01 14:27:34.786199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.301 [2024-10-01 14:27:34.786216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.301 #41 NEW cov: 11828 ft: 14242 corp: 30/879b lim: 35 exec/s: 41 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:52.559 [2024-10-01 14:27:34.835023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fffff6ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.559 [2024-10-01 14:27:34.835054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.559 [2024-10-01 14:27:34.835154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:5b7fffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.559 [2024-10-01 
14:27:34.835170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.559 #42 NEW cov: 11828 ft: 14249 corp: 31/895b lim: 35 exec/s: 42 rss: 70Mb L: 16/35 MS: 1 ChangeBit- 00:07:52.559 [2024-10-01 14:27:34.885586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.559 [2024-10-01 14:27:34.885613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.559 [2024-10-01 14:27:34.885713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff5b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.559 [2024-10-01 14:27:34.885734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.559 [2024-10-01 14:27:34.885823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.559 [2024-10-01 14:27:34.885840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.559 #43 NEW cov: 11828 ft: 14271 corp: 32/922b lim: 35 exec/s: 43 rss: 70Mb L: 27/35 MS: 1 EraseBytes- 00:07:52.559 [2024-10-01 14:27:34.935740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.559 [2024-10-01 14:27:34.935766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.559 [2024-10-01 14:27:34.935860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.559 [2024-10-01 14:27:34.935876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.559 [2024-10-01 14:27:34.935962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffff69 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.559 [2024-10-01 14:27:34.935978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.559 #44 NEW cov: 11828 ft: 14306 corp: 33/946b lim: 35 exec/s: 22 rss: 70Mb L: 24/35 MS: 1 EraseBytes- 00:07:52.559 #44 DONE cov: 11828 ft: 14306 corp: 33/946b lim: 35 exec/s: 22 rss: 70Mb 00:07:52.559 Done 44 runs in 2 second(s) 00:07:52.559 14:27:35 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_4.conf 00:07:52.818 14:27:35 -- ../common.sh@72 -- # (( i++ )) 00:07:52.818 14:27:35 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:52.818 14:27:35 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:52.818 14:27:35 -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:52.818 14:27:35 -- nvmf/run.sh@24 -- # local timen=1 00:07:52.818 14:27:35 -- nvmf/run.sh@25 -- # local core=0x1 00:07:52.818 14:27:35 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:52.818 14:27:35 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:52.818 14:27:35 -- nvmf/run.sh@29 -- # printf %02d 5 
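
The nvmf/run.sh xtrace above and below walks through the setup for fuzzer 5. Collected into one place, those trace lines imply a start_llvm_fuzz along the lines of the sketch below. This is a reconstruction from the trace markers (run.sh@23..@36), not the script's actual source; $rootdir and $output_dir stand in for the long workspace paths shown in full in the log, and the sed redirection is inferred, since xtrace does not print it.

# Sketch of start_llvm_fuzz as implied by the nvmf/run.sh xtrace markers;
# treat every path and variable name here as an assumption.
start_llvm_fuzz() {
    local fuzzer_type=$1    # run.sh@23: 5 for this iteration
    local timen=$2          # run.sh@24: -t, seconds to fuzz
    local core=$3           # run.sh@25: -m, reactor core mask
    local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type   # run.sh@26
    local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf              # run.sh@27
    local port=44$(printf %02d $fuzzer_type)   # run.sh@29: 5 -> 4405, 6 -> 4406

    mkdir -p $corpus_dir    # run.sh@30: per-fuzzer corpus directory
    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # run.sh@33: point the JSON config at this run's listener port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg
    # run.sh@36: one llvm_nvme_fuzz process per fuzzer type
    $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
        -P $output_dir/llvm/ -F "$trid" -c $nvmf_cfg -t $timen \
        -D $corpus_dir -Z $fuzzer_type -r /var/tmp/spdk$fuzzer_type.sock
}
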
00:07:52.818 14:27:35 -- nvmf/run.sh@29 -- # port=4405 00:07:52.818 14:27:35 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:52.818 14:27:35 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:52.818 14:27:35 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:52.818 14:27:35 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 -r /var/tmp/spdk5.sock 00:07:52.818 [2024-10-01 14:27:35.130737] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:52.818 [2024-10-01 14:27:35.130806] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702401 ] 00:07:52.818 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.076 [2024-10-01 14:27:35.444391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.076 [2024-10-01 14:27:35.537760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:53.076 [2024-10-01 14:27:35.537882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.076 [2024-10-01 14:27:35.596374] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.334 [2024-10-01 14:27:35.612575] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:53.334 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.334 INFO: Seed: 1270024482 00:07:53.334 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:53.334 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:53.334 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:53.334 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.334 #2 INITED exec/s: 0 rss: 61Mb 00:07:53.334 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:53.334 This may also happen if the target rejected all inputs we tried so far 00:07:53.334 [2024-10-01 14:27:35.667961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.334 [2024-10-01 14:27:35.667991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.591 NEW_FUNC[1/671]: 0x442af8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:53.591 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:53.591 #4 NEW cov: 11612 ft: 11613 corp: 2/16b lim: 45 exec/s: 0 rss: 68Mb L: 15/15 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:53.591 [2024-10-01 14:27:36.010383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.591 [2024-10-01 14:27:36.010424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.591 #10 NEW cov: 11725 ft: 12151 corp: 3/31b lim: 45 exec/s: 0 rss: 68Mb L: 15/15 MS: 1 ChangeBinInt- 00:07:53.592 [2024-10-01 14:27:36.070619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5b470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.592 [2024-10-01 14:27:36.070646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.592 #11 NEW cov: 11731 ft: 12317 corp: 4/47b lim: 45 exec/s: 0 rss: 68Mb L: 16/16 MS: 1 InsertByte- 00:07:53.849 [2024-10-01 14:27:36.120921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:3f470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.850 [2024-10-01 14:27:36.120948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.850 #12 NEW cov: 11816 ft: 12618 corp: 5/63b lim: 45 exec/s: 0 rss: 68Mb L: 16/16 MS: 1 InsertByte- 00:07:53.850 [2024-10-01 14:27:36.181778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5b470a47 cdw11:5b470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.850 [2024-10-01 14:27:36.181804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.850 [2024-10-01 14:27:36.181900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.850 [2024-10-01 14:27:36.181916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.850 #13 NEW cov: 11816 ft: 13449 corp: 6/81b lim: 45 exec/s: 0 rss: 68Mb L: 18/18 MS: 1 CopyPart- 00:07:53.850 [2024-10-01 14:27:36.241537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4747c90a cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.850 [2024-10-01 14:27:36.241562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.850 #19 NEW cov: 11816 ft: 13548 corp: 7/97b lim: 45 exec/s: 0 rss: 68Mb L: 16/18 MS: 1 InsertByte- 00:07:53.850 [2024-10-01 14:27:36.291724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:3f470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.850 [2024-10-01 14:27:36.291750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.850 #20 NEW cov: 11816 ft: 13630 corp: 8/113b lim: 45 exec/s: 0 rss: 69Mb L: 16/18 MS: 1 CrossOver- 00:07:53.850 [2024-10-01 14:27:36.351999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5b470ac7 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.850 [2024-10-01 14:27:36.352024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.108 #21 NEW cov: 11816 ft: 13708 corp: 9/129b lim: 45 exec/s: 0 rss: 69Mb L: 16/18 MS: 1 ChangeBit- 00:07:54.108 [2024-10-01 14:27:36.403087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5b470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.403114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.108 [2024-10-01 14:27:36.403207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:47475b47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.403222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.108 [2024-10-01 14:27:36.403312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.403328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.108 #22 NEW cov: 11816 ft: 14068 corp: 10/161b lim: 45 exec/s: 0 rss: 69Mb L: 32/32 MS: 1 CrossOver- 00:07:54.108 [2024-10-01 14:27:36.452697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.452726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.108 #23 NEW cov: 11816 ft: 14159 corp: 11/176b lim: 45 exec/s: 0 rss: 69Mb L: 15/32 MS: 1 ShuffleBytes- 00:07:54.108 [2024-10-01 14:27:36.503180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4747c90a cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.503207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.108 #29 NEW cov: 11816 ft: 14184 corp: 12/192b lim: 45 exec/s: 0 rss: 69Mb L: 16/32 MS: 1 ChangeBinInt- 00:07:54.108 [2024-10-01 14:27:36.554109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a470100 cdw11:5b470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.554135] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.108 [2024-10-01 14:27:36.554232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5b470ac7 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.554247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.108 [2024-10-01 14:27:36.554334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.554348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.108 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:54.108 #30 NEW cov: 11839 ft: 14308 corp: 13/226b lim: 45 exec/s: 0 rss: 69Mb L: 34/34 MS: 1 CMP- DE: "\001\000"- 00:07:54.108 [2024-10-01 14:27:36.614360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a470100 cdw11:475b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.614385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.108 [2024-10-01 14:27:36.614471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5b470ac7 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.108 [2024-10-01 14:27:36.614486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.109 [2024-10-01 14:27:36.614572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.109 [2024-10-01 14:27:36.614587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.368 #31 NEW cov: 11839 ft: 14340 corp: 14/260b lim: 45 exec/s: 0 rss: 69Mb L: 34/34 MS: 1 ShuffleBytes- 00:07:54.368 [2024-10-01 14:27:36.673807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:2947c90a cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.673832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.368 #32 NEW cov: 11839 ft: 14357 corp: 15/276b lim: 45 exec/s: 32 rss: 69Mb L: 16/34 MS: 1 ChangeByte- 00:07:54.368 [2024-10-01 14:27:36.725122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5b470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.725148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.368 [2024-10-01 14:27:36.725236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:47474747 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.725252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.368 
[2024-10-01 14:27:36.725338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.725353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.368 [2024-10-01 14:27:36.725439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.725458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.368 #33 NEW cov: 11839 ft: 14696 corp: 16/318b lim: 45 exec/s: 33 rss: 69Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:07:54.368 [2024-10-01 14:27:36.774901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a470100 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.774926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.368 [2024-10-01 14:27:36.775013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:47474747 cdw11:0ac70002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.775031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.368 [2024-10-01 14:27:36.775110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.775125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.368 #34 NEW cov: 11839 ft: 14712 corp: 17/352b lim: 45 exec/s: 34 rss: 69Mb L: 34/42 MS: 1 CopyPart- 00:07:54.368 [2024-10-01 14:27:36.825526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5b470a47 cdw11:86860004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.825551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.368 [2024-10-01 14:27:36.825639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:86868686 cdw11:86860004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.825655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.368 [2024-10-01 14:27:36.825741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:86868686 cdw11:86860004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.825756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.368 [2024-10-01 14:27:36.825841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.825858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:54.368 #35 NEW cov: 11839 ft: 14729 corp: 18/390b lim: 45 exec/s: 35 rss: 69Mb L: 38/42 MS: 1 InsertRepeatedBytes- 00:07:54.368 [2024-10-01 14:27:36.874605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4747c90a cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.368 [2024-10-01 14:27:36.874631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.628 #36 NEW cov: 11839 ft: 14793 corp: 19/406b lim: 45 exec/s: 36 rss: 69Mb L: 16/42 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:54.628 [2024-10-01 14:27:36.925181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a470100 cdw11:5b470000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:36.925210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.628 [2024-10-01 14:27:36.925307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:47865b47 cdw11:86860004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:36.925323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.628 #37 NEW cov: 11839 ft: 14801 corp: 20/428b lim: 45 exec/s: 37 rss: 69Mb L: 22/42 MS: 1 CrossOver- 00:07:54.628 [2024-10-01 14:27:36.986254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:36.986284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.628 [2024-10-01 14:27:36.986373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:36.986390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.628 [2024-10-01 14:27:36.986486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:36.986502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.628 [2024-10-01 14:27:36.986589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:36.986605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.628 #39 NEW cov: 11839 ft: 14827 corp: 21/464b lim: 45 exec/s: 39 rss: 69Mb L: 36/42 MS: 2 PersAutoDict-InsertRepeatedBytes- DE: "\001\000"- 00:07:54.628 [2024-10-01 14:27:37.045308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:37.045337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.628 #40 NEW cov: 11839 ft: 14844 corp: 22/479b lim: 45 exec/s: 40 rss: 
69Mb L: 15/42 MS: 1 ChangeBinInt- 00:07:54.628 [2024-10-01 14:27:37.106807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a470100 cdw11:5b010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:37.106836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.628 [2024-10-01 14:27:37.106931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:47475b47 cdw11:470a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:37.106948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.628 [2024-10-01 14:27:37.107029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:0a470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:37.107044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.628 [2024-10-01 14:27:37.107151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:86864786 cdw11:86470000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.628 [2024-10-01 14:27:37.107166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.628 #41 NEW cov: 11839 ft: 14900 corp: 23/517b lim: 45 exec/s: 41 rss: 69Mb L: 38/42 MS: 1 CrossOver- 00:07:54.888 [2024-10-01 14:27:37.166846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a470100 cdw11:5b470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.888 [2024-10-01 14:27:37.166886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.888 [2024-10-01 14:27:37.166974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5b470ac7 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.888 [2024-10-01 14:27:37.166992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.888 [2024-10-01 14:27:37.167082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.888 [2024-10-01 14:27:37.167098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.888 [2024-10-01 14:27:37.167194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.888 [2024-10-01 14:27:37.167210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.888 #42 NEW cov: 11839 ft: 14916 corp: 24/553b lim: 45 exec/s: 42 rss: 69Mb L: 36/42 MS: 1 CopyPart- 00:07:54.888 [2024-10-01 14:27:37.215903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5b470ac7 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.888 [2024-10-01 14:27:37.215929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.888 #43 NEW cov: 11839 ft: 14945 corp: 25/569b lim: 45 exec/s: 43 rss: 69Mb L: 16/42 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:54.888 [2024-10-01 14:27:37.265987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.888 [2024-10-01 14:27:37.266012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.888 #44 NEW cov: 11839 ft: 14983 corp: 26/584b lim: 45 exec/s: 44 rss: 69Mb L: 15/42 MS: 1 ChangeBit- 00:07:54.888 [2024-10-01 14:27:37.316378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a20 cdw11:00040001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.888 [2024-10-01 14:27:37.316403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.888 #47 NEW cov: 11839 ft: 15013 corp: 27/593b lim: 45 exec/s: 47 rss: 69Mb L: 9/42 MS: 3 InsertRepeatedBytes-ChangeBinInt-CMP- DE: "\000\000\000\004"- 00:07:54.888 [2024-10-01 14:27:37.367294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.888 [2024-10-01 14:27:37.367320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.888 [2024-10-01 14:27:37.367408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.888 [2024-10-01 14:27:37.367424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.888 #48 NEW cov: 11839 ft: 15032 corp: 28/616b lim: 45 exec/s: 48 rss: 69Mb L: 23/42 MS: 1 EraseBytes- 00:07:55.147 [2024-10-01 14:27:37.427399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.427426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.147 #49 NEW cov: 11839 ft: 15114 corp: 29/631b lim: 45 exec/s: 49 rss: 69Mb L: 15/42 MS: 1 ChangeBit- 00:07:55.147 [2024-10-01 14:27:37.478490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a470100 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.478516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.147 [2024-10-01 14:27:37.478610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5b470ac7 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.478626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.147 [2024-10-01 14:27:37.478709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.478728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.147 [2024-10-01 14:27:37.478813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:47474747 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.478828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.147 #50 NEW cov: 11839 ft: 15135 corp: 30/667b lim: 45 exec/s: 50 rss: 70Mb L: 36/42 MS: 1 ChangeBinInt- 00:07:55.147 [2024-10-01 14:27:37.539096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a470100 cdw11:5b010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.539122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.147 [2024-10-01 14:27:37.539221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:47475b47 cdw11:470a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.539238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.147 [2024-10-01 14:27:37.539338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:0a470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.539354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.147 [2024-10-01 14:27:37.539448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:86864786 cdw11:86470000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.539463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.147 #51 NEW cov: 11839 ft: 15219 corp: 31/710b lim: 45 exec/s: 51 rss: 70Mb L: 43/43 MS: 1 CopyPart- 00:07:55.147 [2024-10-01 14:27:37.598020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:47470002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.598046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.147 #52 NEW cov: 11839 ft: 15227 corp: 32/725b lim: 45 exec/s: 52 rss: 70Mb L: 15/43 MS: 1 ChangeByte- 00:07:55.147 [2024-10-01 14:27:37.648643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470a47 cdw11:12710000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.147 [2024-10-01 14:27:37.648668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.407 [2024-10-01 14:27:37.698712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:47470100 cdw11:12710000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.407 [2024-10-01 14:27:37.698740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.407 #54 NEW cov: 11839 ft: 15233 corp: 33/740b lim: 45 exec/s: 27 rss: 70Mb L: 15/43 MS: 2 CMP-PersAutoDict- DE: "\022q\016l\306\177\000\000"-"\001\000"- 00:07:55.407 #54 DONE cov: 
11839 ft: 15233 corp: 33/740b lim: 45 exec/s: 27 rss: 70Mb 00:07:55.407 ###### Recommended dictionary. ###### 00:07:55.407 "\001\000" # Uses: 4 00:07:55.407 "\000\000\000\004" # Uses: 0 00:07:55.407 "\022q\016l\306\177\000\000" # Uses: 0 00:07:55.407 ###### End of recommended dictionary. ###### 00:07:55.407 Done 54 runs in 2 second(s) 00:07:55.407 14:27:37 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_5.conf 00:07:55.407 14:27:37 -- ../common.sh@72 -- # (( i++ )) 00:07:55.407 14:27:37 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:55.407 14:27:37 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:55.407 14:27:37 -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:55.407 14:27:37 -- nvmf/run.sh@24 -- # local timen=1 00:07:55.407 14:27:37 -- nvmf/run.sh@25 -- # local core=0x1 00:07:55.407 14:27:37 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:55.407 14:27:37 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:55.407 14:27:37 -- nvmf/run.sh@29 -- # printf %02d 6 00:07:55.407 14:27:37 -- nvmf/run.sh@29 -- # port=4406 00:07:55.407 14:27:37 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:55.407 14:27:37 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:55.407 14:27:37 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:55.407 14:27:37 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 -r /var/tmp/spdk6.sock 00:07:55.407 [2024-10-01 14:27:37.896588] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:55.407 [2024-10-01 14:27:37.896658] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702762 ] 00:07:55.666 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.925 [2024-10-01 14:27:38.208315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.925 [2024-10-01 14:27:38.296083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.925 [2024-10-01 14:27:38.296217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.925 [2024-10-01 14:27:38.355204] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.925 [2024-10-01 14:27:38.371399] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:55.925 INFO: Running with entropic power schedule (0xFF, 100). 
(0xFF, 100).
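
The run-5 summary above closes with a "Recommended dictionary" block: input tokens libFuzzer found productive ("\001\000" was used 4 times). In stock libFuzzer workflows such tokens are kept in a -dict= file so later runs start from them. This log does not show whether the SPDK harness forwards a dictionary flag, so the snippet below is illustrative only, and the nvmf_5.dict filename is made up.

# libFuzzer dictionary format: one C-escaped, quoted token per line.
# Entries copied verbatim from the "Recommended dictionary" summary above.
cat > nvmf_5.dict <<'EOF'
"\001\000"
"\000\000\000\004"
"\022q\016l\306\177\000\000"
EOF
# A plain libFuzzer target would then take -dict=nvmf_5.dict on its command line.
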
00:07:55.925 INFO: Seed: 4029033117 00:07:55.925 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:55.925 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:55.925 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:55.925 INFO: A corpus is not provided, starting from an empty corpus 00:07:55.925 #2 INITED exec/s: 0 rss: 61Mb 00:07:55.925 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:55.925 This may also happen if the target rejected all inputs we tried so far 00:07:55.925 [2024-10-01 14:27:38.437761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:55.925 [2024-10-01 14:27:38.437802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.443 NEW_FUNC[1/668]: 0x445308 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:56.443 NEW_FUNC[2/668]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:56.443 #3 NEW cov: 11528 ft: 11530 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CrossOver- 00:07:56.444 [2024-10-01 14:27:38.759221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:56.444 [2024-10-01 14:27:38.759276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.444 [2024-10-01 14:27:38.759385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:56.444 [2024-10-01 14:27:38.759410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.444 NEW_FUNC[1/1]: 0x1c5d708 in thread_update_stats /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:918 00:07:56.444 #4 NEW cov: 11642 ft: 12342 corp: 3/7b lim: 10 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 CrossOver- 00:07:56.444 [2024-10-01 14:27:38.819111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0b cdw11:00000000 00:07:56.444 [2024-10-01 14:27:38.819138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.444 #5 NEW cov: 11648 ft: 12651 corp: 4/9b lim: 10 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ChangeBit- 00:07:56.444 [2024-10-01 14:27:38.869461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0b cdw11:00000000 00:07:56.444 [2024-10-01 14:27:38.869488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.444 #6 NEW cov: 11733 ft: 12975 corp: 5/12b lim: 10 exec/s: 0 rss: 69Mb L: 3/4 MS: 1 InsertByte- 00:07:56.444 [2024-10-01 14:27:38.929754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a8b cdw11:00000000 00:07:56.444 [2024-10-01 14:27:38.929780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.444 #7 NEW 
cov: 11733 ft: 13039 corp: 6/14b lim: 10 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ChangeBit- 00:07:56.702 [2024-10-01 14:27:38.981016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aa0 cdw11:00000000 00:07:56.702 [2024-10-01 14:27:38.981045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.702 [2024-10-01 14:27:38.981137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:56.702 [2024-10-01 14:27:38.981154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.702 [2024-10-01 14:27:38.981237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:56.702 [2024-10-01 14:27:38.981254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.702 [2024-10-01 14:27:38.981338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:56.702 [2024-10-01 14:27:38.981354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.702 [2024-10-01 14:27:38.981440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:56.702 [2024-10-01 14:27:38.981456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.702 #9 NEW cov: 11733 ft: 13447 corp: 7/24b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:56.702 [2024-10-01 14:27:39.040235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c0a cdw11:00000000 00:07:56.702 [2024-10-01 14:27:39.040261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.702 #10 NEW cov: 11733 ft: 13499 corp: 8/27b lim: 10 exec/s: 0 rss: 69Mb L: 3/10 MS: 1 InsertByte- 00:07:56.702 [2024-10-01 14:27:39.100498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000b4f cdw11:00000000 00:07:56.702 [2024-10-01 14:27:39.100524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.702 #13 NEW cov: 11733 ft: 13527 corp: 9/29b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 3 EraseBytes-CrossOver-InsertByte- 00:07:56.702 [2024-10-01 14:27:39.150894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c04 cdw11:00000000 00:07:56.703 [2024-10-01 14:27:39.150924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.703 #14 NEW cov: 11733 ft: 13569 corp: 10/32b lim: 10 exec/s: 0 rss: 69Mb L: 3/10 MS: 1 ChangeBinInt- 00:07:56.703 [2024-10-01 14:27:39.201692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aa0 cdw11:00000000 00:07:56.703 [2024-10-01 14:27:39.201717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:56.703 [2024-10-01 14:27:39.201817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:56.703 [2024-10-01 14:27:39.201832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.963 #20 NEW cov: 11733 ft: 13592 corp: 11/37b lim: 10 exec/s: 0 rss: 69Mb L: 5/10 MS: 1 EraseBytes- 00:07:56.963 [2024-10-01 14:27:39.262050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a8b cdw11:00000000 00:07:56.963 [2024-10-01 14:27:39.262077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.963 [2024-10-01 14:27:39.262165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:56.963 [2024-10-01 14:27:39.262181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.963 #21 NEW cov: 11733 ft: 13606 corp: 12/41b lim: 10 exec/s: 0 rss: 69Mb L: 4/10 MS: 1 CrossOver- 00:07:56.963 [2024-10-01 14:27:39.312089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c04 cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.312115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.964 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:56.964 #22 NEW cov: 11756 ft: 13660 corp: 13/44b lim: 10 exec/s: 0 rss: 69Mb L: 3/10 MS: 1 ChangeBinInt- 00:07:56.964 [2024-10-01 14:27:39.372726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.372753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.964 [2024-10-01 14:27:39.372841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.372863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.964 [2024-10-01 14:27:39.372945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.372960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.964 #23 NEW cov: 11756 ft: 13795 corp: 14/51b lim: 10 exec/s: 0 rss: 69Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:56.964 [2024-10-01 14:27:39.423909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aa0 cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.423935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.964 [2024-10-01 14:27:39.424016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.424032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:56.964 [2024-10-01 14:27:39.424121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.424141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.964 [2024-10-01 14:27:39.424225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.424241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.964 [2024-10-01 14:27:39.424324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.424340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.964 #24 NEW cov: 11756 ft: 13858 corp: 15/61b lim: 10 exec/s: 24 rss: 69Mb L: 10/10 MS: 1 CopyPart- 00:07:56.964 [2024-10-01 14:27:39.473066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.964 [2024-10-01 14:27:39.473094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.223 #25 NEW cov: 11756 ft: 13902 corp: 16/64b lim: 10 exec/s: 25 rss: 69Mb L: 3/10 MS: 1 CMP- DE: "\000\000"- 00:07:57.223 [2024-10-01 14:27:39.533830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c04 cdw11:00000000 00:07:57.223 [2024-10-01 14:27:39.533858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.223 [2024-10-01 14:27:39.533941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005d8b cdw11:00000000 00:07:57.223 [2024-10-01 14:27:39.533958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.223 #26 NEW cov: 11756 ft: 13923 corp: 17/68b lim: 10 exec/s: 26 rss: 69Mb L: 4/10 MS: 1 InsertByte- 00:07:57.223 [2024-10-01 14:27:39.583737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:57.223 [2024-10-01 14:27:39.583766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.223 #28 NEW cov: 11756 ft: 13999 corp: 18/71b lim: 10 exec/s: 28 rss: 69Mb L: 3/10 MS: 2 EraseBytes-PersAutoDict- DE: "\000\000"- 00:07:57.223 [2024-10-01 14:27:39.633925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ac04 cdw11:00000000 00:07:57.223 [2024-10-01 14:27:39.633951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.223 #29 NEW cov: 11756 ft: 14014 corp: 19/74b lim: 10 exec/s: 29 rss: 69Mb L: 3/10 MS: 1 ChangeBit- 00:07:57.223 [2024-10-01 14:27:39.684427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aa0 cdw11:00000000 00:07:57.223 [2024-10-01 14:27:39.684453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.223 [2024-10-01 14:27:39.684546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a059 cdw11:00000000 00:07:57.223 [2024-10-01 14:27:39.684561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.223 #30 NEW cov: 11756 ft: 14040 corp: 20/79b lim: 10 exec/s: 30 rss: 69Mb L: 5/10 MS: 1 ChangeBinInt- 00:07:57.223 [2024-10-01 14:27:39.745075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:57.223 [2024-10-01 14:27:39.745102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.223 [2024-10-01 14:27:39.745186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:57.223 [2024-10-01 14:27:39.745202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.223 [2024-10-01 14:27:39.745287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:57.223 [2024-10-01 14:27:39.745303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.482 #31 NEW cov: 11756 ft: 14123 corp: 21/85b lim: 10 exec/s: 31 rss: 69Mb L: 6/10 MS: 1 EraseBytes- 00:07:57.482 [2024-10-01 14:27:39.805781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aa0 cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.805810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.482 [2024-10-01 14:27:39.805886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.805903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.482 [2024-10-01 14:27:39.805991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.806005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.482 [2024-10-01 14:27:39.806088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.806103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.482 [2024-10-01 14:27:39.806187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.806202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.482 #32 NEW cov: 11756 ft: 14131 corp: 22/95b lim: 10 exec/s: 32 rss: 69Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:57.482 [2024-10-01 14:27:39.855742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c04 
cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.855767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.482 [2024-10-01 14:27:39.855851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ecec cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.855867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.482 [2024-10-01 14:27:39.855945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ecec cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.855961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.482 [2024-10-01 14:27:39.856042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ec5d cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.856057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.482 #33 NEW cov: 11756 ft: 14173 corp: 23/104b lim: 10 exec/s: 33 rss: 69Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:57.482 [2024-10-01 14:27:39.915307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a30 cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.915332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.482 #34 NEW cov: 11756 ft: 14211 corp: 24/106b lim: 10 exec/s: 34 rss: 70Mb L: 2/10 MS: 1 EraseBytes- 00:07:57.482 [2024-10-01 14:27:39.966069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.966095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.482 [2024-10-01 14:27:39.966184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:57.482 [2024-10-01 14:27:39.966199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.482 #36 NEW cov: 11756 ft: 14215 corp: 25/111b lim: 10 exec/s: 36 rss: 70Mb L: 5/10 MS: 2 CrossOver-CrossOver- 00:07:57.741 [2024-10-01 14:27:40.026407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000b4f cdw11:00000000 00:07:57.741 [2024-10-01 14:27:40.026435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.741 #37 NEW cov: 11756 ft: 14283 corp: 26/114b lim: 10 exec/s: 37 rss: 70Mb L: 3/10 MS: 1 CopyPart- 00:07:57.741 [2024-10-01 14:27:40.086649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b04 cdw11:00000000 00:07:57.741 [2024-10-01 14:27:40.086675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.741 #38 NEW cov: 11756 ft: 14303 corp: 27/117b lim: 10 exec/s: 38 rss: 70Mb L: 3/10 MS: 1 ChangeByte- 00:07:57.741 [2024-10-01 14:27:40.137687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 
cdw10:00000a0a cdw11:00000000 00:07:57.741 [2024-10-01 14:27:40.137713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.741 [2024-10-01 14:27:40.137800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a8c cdw11:00000000 00:07:57.741 [2024-10-01 14:27:40.137816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.741 [2024-10-01 14:27:40.137912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008c8c cdw11:00000000 00:07:57.741 [2024-10-01 14:27:40.137928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.741 [2024-10-01 14:27:40.138007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:57.741 [2024-10-01 14:27:40.138022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.741 #39 NEW cov: 11756 ft: 14311 corp: 28/126b lim: 10 exec/s: 39 rss: 70Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:57.741 [2024-10-01 14:27:40.197238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:57.741 [2024-10-01 14:27:40.197265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.741 [2024-10-01 14:27:40.197355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.741 [2024-10-01 14:27:40.197372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.741 #41 NEW cov: 11756 ft: 14361 corp: 29/131b lim: 10 exec/s: 41 rss: 70Mb L: 5/10 MS: 2 EraseBytes-CMP- DE: "\377\377\377\002"- 00:07:57.741 [2024-10-01 14:27:40.257273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001070 cdw11:00000000 00:07:57.741 [2024-10-01 14:27:40.257298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.000 #43 NEW cov: 11756 ft: 14368 corp: 30/133b lim: 10 exec/s: 43 rss: 70Mb L: 2/10 MS: 2 ChangeBinInt-InsertByte- 00:07:58.000 [2024-10-01 14:27:40.308319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aa0 cdw11:00000000 00:07:58.000 [2024-10-01 14:27:40.308345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.000 [2024-10-01 14:27:40.308422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:58.000 [2024-10-01 14:27:40.308440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.000 [2024-10-01 14:27:40.308521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a0a0 cdw11:00000000 00:07:58.000 [2024-10-01 14:27:40.308536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0
00:07:58.000 [2024-10-01 14:27:40.308617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a0a0 cdw11:00000000
00:07:58.000 [2024-10-01 14:27:40.308632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:58.000 #44 NEW cov: 11756 ft: 14392 corp: 31/141b lim: 10 exec/s: 44 rss: 70Mb L: 8/10 MS: 1 EraseBytes-
00:07:58.000 [2024-10-01 14:27:40.358271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001070 cdw11:00000000
00:07:58.000 [2024-10-01 14:27:40.358298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:58.000 [2024-10-01 14:27:40.358383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000
00:07:58.000 [2024-10-01 14:27:40.358399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:58.000 [2024-10-01 14:27:40.358476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff02 cdw11:00000000
00:07:58.000 [2024-10-01 14:27:40.358495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:58.000 #45 NEW cov: 11756 ft: 14405 corp: 32/147b lim: 10 exec/s: 45 rss: 70Mb L: 6/10 MS: 1 PersAutoDict- DE: "\377\377\377\002"-
00:07:58.000 [2024-10-01 14:27:40.418139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000
00:07:58.000 [2024-10-01 14:27:40.418164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:58.000 [2024-10-01 14:27:40.418246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000
00:07:58.000 [2024-10-01 14:27:40.418262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:58.000 #46 NEW cov: 11756 ft: 14428 corp: 33/151b lim: 10 exec/s: 23 rss: 70Mb L: 4/10 MS: 1 ShuffleBytes-
00:07:58.000 #46 DONE cov: 11756 ft: 14428 corp: 33/151b lim: 10 exec/s: 23 rss: 70Mb
00:07:58.000 ###### Recommended dictionary. ######
00:07:58.000 "\000\000" # Uses: 1
00:07:58.000 "\377\377\377\002" # Uses: 1
00:07:58.000 ###### End of recommended dictionary. ######
00:07:58.000 Done 46 runs in 2 second(s)
00:07:58.259 14:27:40 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_6.conf
00:07:58.259 14:27:40 -- ../common.sh@72 -- # (( i++ ))
00:07:58.259 14:27:40 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:58.259 14:27:40 -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1
00:07:58.259 14:27:40 -- nvmf/run.sh@23 -- # local fuzzer_type=7
00:07:58.259 14:27:40 -- nvmf/run.sh@24 -- # local timen=1
00:07:58.259 14:27:40 -- nvmf/run.sh@25 -- # local core=0x1
00:07:58.259 14:27:40 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:07:58.259 14:27:40 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf
00:07:58.259 14:27:40 -- nvmf/run.sh@29 -- # printf %02d 7
00:07:58.259 14:27:40 -- nvmf/run.sh@29 -- # port=4407
00:07:58.259 14:27:40 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:07:58.259 14:27:40 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407'
00:07:58.259 14:27:40 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:58.260 14:27:40 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 -r /var/tmp/spdk7.sock
00:07:58.260 [2024-10-01 14:27:40.609290] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:07:58.260 [2024-10-01 14:27:40.609360] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid703132 ]
00:07:58.260 EAL: No free 2048 kB hugepages reported on node 1
00:07:58.518 [2024-10-01 14:27:40.926421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.518 [2024-10-01 14:27:41.009579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:58.518 [2024-10-01 14:27:41.009729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.776 [2024-10-01 14:27:41.068350] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:58.776 [2024-10-01 14:27:41.084564] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 ***
00:07:58.776 INFO: Running with entropic power schedule (0xFF, 100).
00:07:58.776 INFO: Seed: 2448054908
00:07:58.776 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:07:58.776 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:07:58.776 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:07:58.776 INFO: A corpus is not provided, starting from an empty corpus
00:07:58.776 #2 INITED exec/s: 0 rss: 61Mb
00:07:58.776 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
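The traced nvmf/run.sh steps above can be replayed by hand. What follows is a minimal shell sketch, not the literal run.sh source: it assumes the same workspace layout as this job, the SPDK variable is a shorthand introduced here, and the redirect into /tmp/fuzz_json_7.conf is inferred from the later -c flag (the trace only shows the sed command itself).

    # Sketch of a manual replay of "start_llvm_fuzz 7 1 0x1" as traced above.
    # SPDK is a hypothetical shorthand, not a run.sh variable.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    mkdir -p "$SPDK/../corpus/llvm_nvmf_7"
    # The trace shows "printf %02d 7" followed by "port=4407", i.e. the port
    # appears to be 44 plus the zero-padded fuzzer index (an inference).
    port="44$(printf %02d 7)"
    # Point the NVMe-oF fuzz config at the per-fuzzer port; the output file
    # name matches nvmf_cfg=/tmp/fuzz_json_7.conf from the trace.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_7.conf
    # Launch fuzzer 7 with the exact flags recorded in the trace.
    "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$SPDK/../output/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c /tmp/fuzz_json_7.conf -t 1 \
        -D "$SPDK/../corpus/llvm_nvmf_7" -Z 7 -r /var/tmp/spdk7.sock

Every flag value here is read directly off the trace: -Z 7 matches fuzzer_type=7 (the admin DELETE IO SQ fuzzer whose handler, fuzz_admin_delete_io_submission_queue_command, is registered in the NEW_FUNC lines below), -t 1 matches timen=1, and -D/-P name the corpus and output directories created for this run.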
00:07:58.776 This may also happen if the target rejected all inputs we tried so far 00:07:58.776 [2024-10-01 14:27:41.129924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:58.776 [2024-10-01 14:27:41.129954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.776 [2024-10-01 14:27:41.130006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:58.776 [2024-10-01 14:27:41.130020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.035 NEW_FUNC[1/669]: 0x445d08 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:59.035 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:59.035 #4 NEW cov: 11529 ft: 11530 corp: 2/6b lim: 10 exec/s: 0 rss: 68Mb L: 5/5 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:59.035 [2024-10-01 14:27:41.450680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d728 cdw11:00000000 00:07:59.035 [2024-10-01 14:27:41.450717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.035 [2024-10-01 14:27:41.450772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.035 [2024-10-01 14:27:41.450786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.035 #5 NEW cov: 11642 ft: 11849 corp: 3/11b lim: 10 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ChangeByte- 00:07:59.035 [2024-10-01 14:27:41.500885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.035 [2024-10-01 14:27:41.500912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.035 [2024-10-01 14:27:41.500965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.035 [2024-10-01 14:27:41.500979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.035 [2024-10-01 14:27:41.501029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.035 [2024-10-01 14:27:41.501042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.035 #6 NEW cov: 11648 ft: 12406 corp: 4/18b lim: 10 exec/s: 0 rss: 68Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:07:59.035 [2024-10-01 14:27:41.540975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.035 [2024-10-01 14:27:41.541001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.035 [2024-10-01 14:27:41.541053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 00:07:59.035 [2024-10-01 14:27:41.541066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.035 [2024-10-01 14:27:41.541115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.035 [2024-10-01 14:27:41.541128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.293 #7 NEW cov: 11733 ft: 12618 corp: 5/25b lim: 10 exec/s: 0 rss: 69Mb L: 7/7 MS: 1 ChangeBinInt- 00:07:59.293 [2024-10-01 14:27:41.580968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.293 [2024-10-01 14:27:41.580992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.293 [2024-10-01 14:27:41.581043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.293 [2024-10-01 14:27:41.581057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.293 #8 NEW cov: 11733 ft: 12777 corp: 6/30b lim: 10 exec/s: 0 rss: 69Mb L: 5/7 MS: 1 CopyPart- 00:07:59.293 [2024-10-01 14:27:41.621419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.293 [2024-10-01 14:27:41.621443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.293 [2024-10-01 14:27:41.621491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.293 [2024-10-01 14:27:41.621504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.293 [2024-10-01 14:27:41.621553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.293 [2024-10-01 14:27:41.621567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.294 [2024-10-01 14:27:41.621615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:07:59.294 [2024-10-01 14:27:41.621628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.294 [2024-10-01 14:27:41.621677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.294 [2024-10-01 14:27:41.621689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.294 #9 NEW cov: 11733 ft: 13100 corp: 7/40b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:59.294 [2024-10-01 14:27:41.661191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002f28 cdw11:00000000 00:07:59.294 [2024-10-01 14:27:41.661215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:59.294 [2024-10-01 14:27:41.661265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.294 [2024-10-01 14:27:41.661279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.294 #10 NEW cov: 11733 ft: 13167 corp: 8/45b lim: 10 exec/s: 0 rss: 69Mb L: 5/10 MS: 1 ChangeBinInt- 00:07:59.294 [2024-10-01 14:27:41.701326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002f28 cdw11:00000000 00:07:59.294 [2024-10-01 14:27:41.701350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.294 [2024-10-01 14:27:41.701400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d5d7 cdw11:00000000 00:07:59.294 [2024-10-01 14:27:41.701414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.294 #11 NEW cov: 11733 ft: 13248 corp: 9/50b lim: 10 exec/s: 0 rss: 69Mb L: 5/10 MS: 1 ChangeBit- 00:07:59.294 [2024-10-01 14:27:41.741294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000a8 cdw11:00000000 00:07:59.294 [2024-10-01 14:27:41.741319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.294 #14 NEW cov: 11733 ft: 13486 corp: 10/52b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 3 ShuffleBytes-CrossOver-InsertByte- 00:07:59.294 [2024-10-01 14:27:41.781539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.294 [2024-10-01 14:27:41.781563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.294 [2024-10-01 14:27:41.781630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.294 [2024-10-01 14:27:41.781644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.294 #15 NEW cov: 11733 ft: 13512 corp: 11/57b lim: 10 exec/s: 0 rss: 69Mb L: 5/10 MS: 1 EraseBytes- 00:07:59.553 [2024-10-01 14:27:41.821664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.821689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.821738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d70a cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.821753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.553 #16 NEW cov: 11733 ft: 13523 corp: 12/61b lim: 10 exec/s: 0 rss: 69Mb L: 4/10 MS: 1 EraseBytes- 00:07:59.553 [2024-10-01 14:27:41.861979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.862003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.862053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.862067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.862113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.862130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.862178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e300 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.862191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.553 #17 NEW cov: 11733 ft: 13616 corp: 13/70b lim: 10 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:59.553 [2024-10-01 14:27:41.902246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7ff cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.902271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.902338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.902352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.902402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.902415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.902464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000028d7 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.902477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.902527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.902540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.553 #18 NEW cov: 11733 ft: 13646 corp: 14/80b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:59.553 [2024-10-01 14:27:41.942340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d5d7 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.942363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.942429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.942443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.942491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.942504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.942552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.942565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.942613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.942626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.553 #19 NEW cov: 11733 ft: 13698 corp: 15/90b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:59.553 [2024-10-01 14:27:41.982443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7ff cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.982469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.982523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff28 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.982537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.982587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.982600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.982647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.982660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:41.982711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:07:59.553 [2024-10-01 14:27:41.982729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.553 #20 NEW cov: 11733 ft: 13737 corp: 16/100b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:59.553 [2024-10-01 14:27:42.022545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002f28 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:42.022569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:42.022633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d5d7 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:42.022647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:42.022697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a2f cdw11:00000000 00:07:59.553 [2024-10-01 14:27:42.022710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:42.022764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000028d5 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:42.022778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.553 [2024-10-01 14:27:42.022824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:07:59.553 [2024-10-01 14:27:42.022838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.553 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:59.553 #21 NEW cov: 11756 ft: 13772 corp: 17/110b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 CopyPart- 00:07:59.553 [2024-10-01 14:27:42.072266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000004a8 cdw11:00000000 00:07:59.553 [2024-10-01 14:27:42.072292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.812 #22 NEW cov: 11756 ft: 13833 corp: 18/112b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 1 ChangeBit- 00:07:59.812 [2024-10-01 14:27:42.112473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.812 [2024-10-01 14:27:42.112497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.812 [2024-10-01 14:27:42.112546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:07:59.812 [2024-10-01 14:27:42.112559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.812 #23 NEW cov: 11756 ft: 13848 corp: 19/117b lim: 10 exec/s: 23 rss: 69Mb L: 5/10 MS: 1 ShuffleBytes- 00:07:59.812 [2024-10-01 14:27:42.152974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7ff cdw11:00000000 00:07:59.812 [2024-10-01 14:27:42.152998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.812 [2024-10-01 14:27:42.153046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff28 cdw11:00000000 00:07:59.812 [2024-10-01 14:27:42.153060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.812 [2024-10-01 14:27:42.153108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.812 [2024-10-01 14:27:42.153122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.812 [2024-10-01 14:27:42.153171] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:07:59.812 [2024-10-01 14:27:42.153184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.812 [2024-10-01 14:27:42.153231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:07:59.812 [2024-10-01 14:27:42.153244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.812 #24 NEW cov: 11756 ft: 13891 corp: 20/127b lim: 10 exec/s: 24 rss: 69Mb L: 10/10 MS: 1 CopyPart- 00:07:59.812 [2024-10-01 14:27:42.192963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.812 [2024-10-01 14:27:42.192988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.812 [2024-10-01 14:27:42.193056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 00:07:59.812 [2024-10-01 14:27:42.193071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.812 [2024-10-01 14:27:42.193118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.193131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.813 [2024-10-01 14:27:42.193181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.193194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.813 #25 NEW cov: 11756 ft: 13912 corp: 21/136b lim: 10 exec/s: 25 rss: 69Mb L: 9/10 MS: 1 CrossOver- 00:07:59.813 [2024-10-01 14:27:42.232758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d5d7 cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.232784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.813 #26 NEW cov: 11756 ft: 13961 corp: 22/138b lim: 10 exec/s: 26 rss: 70Mb L: 2/10 MS: 1 CrossOver- 00:07:59.813 [2024-10-01 14:27:42.273258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7ff cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.273283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.813 [2024-10-01 14:27:42.273333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.273346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.813 [2024-10-01 14:27:42.273397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.273410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.813 [2024-10-01 14:27:42.273459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000028d7 cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.273471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.813 [2024-10-01 14:27:42.273520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.273533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.813 #27 NEW cov: 11756 ft: 13969 corp: 23/148b lim: 10 exec/s: 27 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:07:59.813 [2024-10-01 14:27:42.313391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7ff cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.313416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.813 [2024-10-01 14:27:42.313467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff26 cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.313480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.813 [2024-10-01 14:27:42.313529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.313542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.813 [2024-10-01 14:27:42.313591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.313604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.813 [2024-10-01 14:27:42.313651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:07:59.813 [2024-10-01 14:27:42.313664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.813 #28 NEW cov: 11756 ft: 13975 corp: 24/158b lim: 10 exec/s: 28 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:08:00.072 [2024-10-01 14:27:42.353051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d5d7 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.353077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.072 #29 NEW cov: 11756 ft: 14080 corp: 25/160b lim: 10 exec/s: 29 rss: 70Mb L: 2/10 MS: 1 ShuffleBytes- 00:08:00.072 [2024-10-01 14:27:42.393283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.393309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.393358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000057d7 cdw11:00000000 
00:08:00.072 [2024-10-01 14:27:42.393372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.072 #30 NEW cov: 11756 ft: 14101 corp: 26/165b lim: 10 exec/s: 30 rss: 70Mb L: 5/10 MS: 1 ChangeBit- 00:08:00.072 [2024-10-01 14:27:42.433644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.433680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.433756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e6e3 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.433771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.433832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.433845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.433892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e300 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.433905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.072 #31 NEW cov: 11756 ft: 14185 corp: 27/174b lim: 10 exec/s: 31 rss: 70Mb L: 9/10 MS: 1 ChangeBinInt- 00:08:00.072 [2024-10-01 14:27:42.473868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.473893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.473943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.473957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.474005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.474018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.474066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e300 cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.474079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.474126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000a8fe cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.474139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.072 #32 NEW cov: 11756 ft: 14201 corp: 28/184b lim: 10 exec/s: 32 rss: 70Mb L: 10/10 MS: 1 InsertByte- 00:08:00.072 [2024-10-01 14:27:42.513962] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7ff cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.513988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.514022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.514035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.514084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000028ff cdw11:00000000 00:08:00.072 [2024-10-01 14:27:42.514097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.072 [2024-10-01 14:27:42.514162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.514175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.073 [2024-10-01 14:27:42.514225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.514240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.073 #33 NEW cov: 11756 ft: 14211 corp: 29/194b lim: 10 exec/s: 33 rss: 70Mb L: 10/10 MS: 1 ShuffleBytes- 00:08:00.073 [2024-10-01 14:27:42.553969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.553994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.073 [2024-10-01 14:27:42.554045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.554058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.073 [2024-10-01 14:27:42.554107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.554120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.073 [2024-10-01 14:27:42.554167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.554180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.073 #34 NEW cov: 11756 ft: 14212 corp: 30/203b lim: 10 exec/s: 34 rss: 70Mb L: 9/10 MS: 1 EraseBytes- 00:08:00.073 [2024-10-01 14:27:42.594226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d5d7 cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.594250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.073 [2024-10-01 14:27:42.594302] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fff7 cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.594315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.073 [2024-10-01 14:27:42.594363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.594376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.073 [2024-10-01 14:27:42.594424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.594437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.073 [2024-10-01 14:27:42.594485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:08:00.073 [2024-10-01 14:27:42.594498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.332 #35 NEW cov: 11756 ft: 14277 corp: 31/213b lim: 10 exec/s: 35 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:08:00.332 [2024-10-01 14:27:42.634297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7ff cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.634322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.634372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.634385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.634434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.634446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.634502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000028d7 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.634515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.634565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.634577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.332 #36 NEW cov: 11756 ft: 14295 corp: 32/223b lim: 10 exec/s: 36 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:08:00.332 [2024-10-01 14:27:42.674430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7ff cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.674454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.674503] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000026ff cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.674517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.674567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.674579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.674628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.674640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.674691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.674704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.714513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000026ff cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.714537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.714588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.714601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.714649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.714662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.714711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d70a cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.714728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.714777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.714790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.332 #38 NEW cov: 11756 ft: 14300 corp: 33/233b lim: 10 exec/s: 38 rss: 70Mb L: 10/10 MS: 2 ShuffleBytes-CopyPart- 00:08:00.332 [2024-10-01 14:27:42.754608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7ff cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.754634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.754702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff 
cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.754716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.754772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff6c cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.754786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.754834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000028d7 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.754847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.754893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.754906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.332 #39 NEW cov: 11756 ft: 14342 corp: 34/243b lim: 10 exec/s: 39 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:08:00.332 [2024-10-01 14:27:42.794520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.794544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.794611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.794625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.794674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000700 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.794687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.332 #40 NEW cov: 11756 ft: 14359 corp: 35/250b lim: 10 exec/s: 40 rss: 70Mb L: 7/10 MS: 1 ShuffleBytes- 00:08:00.332 [2024-10-01 14:27:42.834664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.834688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.834752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.834766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.332 [2024-10-01 14:27:42.834815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000700 cdw11:00000000 00:08:00.332 [2024-10-01 14:27:42.834828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.591 #41 NEW cov: 11756 ft: 14371 corp: 36/257b lim: 10 exec/s: 41 rss: 70Mb L: 7/10 MS: 1 ChangeBit- 00:08:00.591 [2024-10-01 14:27:42.874547] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffd5 cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.874571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.591 #42 NEW cov: 11756 ft: 14403 corp: 37/260b lim: 10 exec/s: 42 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:08:00.591 [2024-10-01 14:27:42.914846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.914873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.591 [2024-10-01 14:27:42.914925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005730 cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.914939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.591 [2024-10-01 14:27:42.914989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d70a cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.915002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.591 #43 NEW cov: 11756 ft: 14420 corp: 38/266b lim: 10 exec/s: 43 rss: 70Mb L: 6/10 MS: 1 InsertByte- 00:08:00.591 [2024-10-01 14:27:42.955199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.955223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.591 [2024-10-01 14:27:42.955275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.955288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.591 [2024-10-01 14:27:42.955339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000028ff cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.955352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.591 [2024-10-01 14:27:42.955402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffd7 cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.955415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.591 [2024-10-01 14:27:42.955461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d70a cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.955475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.591 #44 NEW cov: 11756 ft: 14438 corp: 39/276b lim: 10 exec/s: 44 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:08:00.591 [2024-10-01 14:27:42.995084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.995108] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.591 [2024-10-01 14:27:42.995160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.591 [2024-10-01 14:27:42.995173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.591 [2024-10-01 14:27:42.995224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.592 [2024-10-01 14:27:42.995238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.592 #45 NEW cov: 11756 ft: 14452 corp: 40/283b lim: 10 exec/s: 45 rss: 70Mb L: 7/10 MS: 1 ChangeBinInt- 00:08:00.592 [2024-10-01 14:27:43.035024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d5d7 cdw11:00000000 00:08:00.592 [2024-10-01 14:27:43.035048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.592 [2024-10-01 14:27:43.035117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002f28 cdw11:00000000 00:08:00.592 [2024-10-01 14:27:43.035134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.592 #46 NEW cov: 11756 ft: 14493 corp: 41/288b lim: 10 exec/s: 46 rss: 70Mb L: 5/10 MS: 1 ShuffleBytes- 00:08:00.592 [2024-10-01 14:27:43.075283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.592 [2024-10-01 14:27:43.075308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.592 [2024-10-01 14:27:43.075377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 00:08:00.592 [2024-10-01 14:27:43.075390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.592 [2024-10-01 14:27:43.075442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000700 cdw11:00000000 00:08:00.592 [2024-10-01 14:27:43.075456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.592 #47 NEW cov: 11756 ft: 14498 corp: 42/295b lim: 10 exec/s: 47 rss: 70Mb L: 7/10 MS: 1 CopyPart- 00:08:00.592 [2024-10-01 14:27:43.115231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002fd7 cdw11:00000000 00:08:00.592 [2024-10-01 14:27:43.115257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.851 #48 NEW cov: 11756 ft: 14541 corp: 43/298b lim: 10 exec/s: 24 rss: 70Mb L: 3/10 MS: 1 EraseBytes- 00:08:00.851 #48 DONE cov: 11756 ft: 14541 corp: 43/298b lim: 10 exec/s: 24 rss: 70Mb 00:08:00.851 Done 48 runs in 2 second(s) 00:08:00.851 14:27:43 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_7.conf 00:08:00.851 14:27:43 -- ../common.sh@72 -- # (( i++ )) 00:08:00.851 14:27:43 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:00.851 14:27:43 -- ../common.sh@73 -- # 
start_llvm_fuzz 8 1 0x1 00:08:00.851 14:27:43 -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:08:00.851 14:27:43 -- nvmf/run.sh@24 -- # local timen=1 00:08:00.851 14:27:43 -- nvmf/run.sh@25 -- # local core=0x1 00:08:00.851 14:27:43 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:00.851 14:27:43 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:08:00.851 14:27:43 -- nvmf/run.sh@29 -- # printf %02d 8 00:08:00.851 14:27:43 -- nvmf/run.sh@29 -- # port=4408 00:08:00.851 14:27:43 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:00.851 14:27:43 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:08:00.851 14:27:43 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:00.851 14:27:43 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 -r /var/tmp/spdk8.sock 00:08:00.851 [2024-10-01 14:27:43.319614] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:00.851 [2024-10-01 14:27:43.319711] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid703503 ] 00:08:00.851 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.110 [2024-10-01 14:27:43.630913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.368 [2024-10-01 14:27:43.719220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.368 [2024-10-01 14:27:43.719364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.368 [2024-10-01 14:27:43.777986] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.368 [2024-10-01 14:27:43.794205] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:08:01.368 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:01.368 INFO: Seed: 862096520 00:08:01.368 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:01.368 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:01.368 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:01.368 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.368 [2024-10-01 14:27:43.849544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.368 [2024-10-01 14:27:43.849574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.368 #2 INITED cov: 11557 ft: 11558 corp: 1/1b exec/s: 0 rss: 67Mb 00:08:01.368 [2024-10-01 14:27:43.879521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.368 [2024-10-01 14:27:43.879548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.626 #3 NEW cov: 11670 ft: 12165 corp: 2/2b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 CopyPart- 00:08:01.626 [2024-10-01 14:27:43.929650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:43.929675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.626 #4 NEW cov: 11676 ft: 12363 corp: 3/3b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeBit- 00:08:01.626 [2024-10-01 14:27:43.969771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:43.969796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.626 #5 NEW cov: 11761 ft: 12658 corp: 4/4b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ShuffleBytes- 00:08:01.626 [2024-10-01 14:27:44.010041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:44.010066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.626 [2024-10-01 14:27:44.010120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:44.010133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.626 #6 NEW cov: 11761 ft: 13438 corp: 5/6b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 InsertByte- 00:08:01.626 [2024-10-01 14:27:44.050478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:44.050502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.626 
[2024-10-01 14:27:44.050573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:44.050588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.626 [2024-10-01 14:27:44.050641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:44.050655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.626 [2024-10-01 14:27:44.050711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:44.050730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.626 #7 NEW cov: 11761 ft: 13816 corp: 6/10b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:08:01.626 [2024-10-01 14:27:44.100148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:44.100173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.626 #8 NEW cov: 11761 ft: 13888 corp: 7/11b lim: 5 exec/s: 0 rss: 68Mb L: 1/4 MS: 1 ShuffleBytes- 00:08:01.626 [2024-10-01 14:27:44.140383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.626 [2024-10-01 14:27:44.140409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.627 [2024-10-01 14:27:44.140464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-01 14:27:44.140477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.885 #9 NEW cov: 11761 ft: 13916 corp: 8/13b lim: 5 exec/s: 0 rss: 68Mb L: 2/4 MS: 1 CopyPart- 00:08:01.885 [2024-10-01 14:27:44.180477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.180502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.180558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.180571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.885 #10 NEW cov: 11761 ft: 13954 corp: 9/15b lim: 5 exec/s: 0 rss: 68Mb L: 2/4 MS: 1 InsertByte- 00:08:01.885 [2024-10-01 14:27:44.220951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.220975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.221046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.221060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.221115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.221129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.221184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.221197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.885 #11 NEW cov: 11761 ft: 14023 corp: 10/19b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 CopyPart- 00:08:01.885 [2024-10-01 14:27:44.271124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.271151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.271224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.271238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.271293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.271306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.271362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.271376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.885 #12 NEW cov: 11761 ft: 14053 corp: 11/23b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 ChangeBit- 00:08:01.885 [2024-10-01 14:27:44.311373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.311397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.311446] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.311459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.311512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.311525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.311578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.311591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.311643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.311656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:01.885 #13 NEW cov: 11761 ft: 14201 corp: 12/28b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 CrossOver- 00:08:01.885 [2024-10-01 14:27:44.361531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.361555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.361612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.361626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.361678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.361694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.361765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.885 [2024-10-01 14:27:44.361778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.885 [2024-10-01 14:27:44.361832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-01 14:27:44.361845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:01.886 #14 NEW cov: 11761 ft: 14223 corp: 13/33b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 
MS: 1 CrossOver- 00:08:02.144 [2024-10-01 14:27:44.411545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.411570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.144 [2024-10-01 14:27:44.411627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.411641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.144 [2024-10-01 14:27:44.411695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.411708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.144 [2024-10-01 14:27:44.411767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.411781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.144 #15 NEW cov: 11761 ft: 14253 corp: 14/37b lim: 5 exec/s: 0 rss: 68Mb L: 4/5 MS: 1 CMP- DE: "\001\000\000\014"- 00:08:02.144 [2024-10-01 14:27:44.461821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.461847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.144 [2024-10-01 14:27:44.461903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.461917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.144 [2024-10-01 14:27:44.461974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.461988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.144 [2024-10-01 14:27:44.462046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.462059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.144 [2024-10-01 14:27:44.462114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.462130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:08:02.144 #16 NEW cov: 11761 ft: 14271 corp: 15/42b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ChangeBit- 00:08:02.144 [2024-10-01 14:27:44.511625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.144 [2024-10-01 14:27:44.511649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.145 [2024-10-01 14:27:44.511725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-01 14:27:44.511739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.145 [2024-10-01 14:27:44.511793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-01 14:27:44.511806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.145 #17 NEW cov: 11761 ft: 14517 corp: 16/45b lim: 5 exec/s: 0 rss: 68Mb L: 3/5 MS: 1 EraseBytes- 00:08:02.145 [2024-10-01 14:27:44.561526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-01 14:27:44.561552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.145 #18 NEW cov: 11761 ft: 14540 corp: 17/46b lim: 5 exec/s: 0 rss: 68Mb L: 1/5 MS: 1 CrossOver- 00:08:02.145 [2024-10-01 14:27:44.601729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-01 14:27:44.601754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.145 [2024-10-01 14:27:44.601810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-01 14:27:44.601823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.145 #19 NEW cov: 11761 ft: 14569 corp: 18/48b lim: 5 exec/s: 0 rss: 68Mb L: 2/5 MS: 1 ChangeBit- 00:08:02.145 [2024-10-01 14:27:44.641790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-01 14:27:44.641813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.145 [2024-10-01 14:27:44.641865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-01 14:27:44.641879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.145 #20 NEW cov: 11761 ft: 14642 corp: 19/50b lim: 5 exec/s: 0 rss: 68Mb L: 2/5 MS: 1 ChangeBit- 
00:08:02.403 [2024-10-01 14:27:44.682366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.403 [2024-10-01 14:27:44.682391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.403 [2024-10-01 14:27:44.682447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.403 [2024-10-01 14:27:44.682463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.403 [2024-10-01 14:27:44.682516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.403 [2024-10-01 14:27:44.682530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.403 [2024-10-01 14:27:44.682582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.403 [2024-10-01 14:27:44.682595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.403 [2024-10-01 14:27:44.682648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.403 [2024-10-01 14:27:44.682661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.403 #21 NEW cov: 11761 ft: 14730 corp: 20/55b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ChangeBinInt- 00:08:02.403 [2024-10-01 14:27:44.722531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.403 [2024-10-01 14:27:44.722555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.404 [2024-10-01 14:27:44.722610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.404 [2024-10-01 14:27:44.722623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.404 [2024-10-01 14:27:44.722693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.404 [2024-10-01 14:27:44.722706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.404 [2024-10-01 14:27:44.722759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.404 [2024-10-01 14:27:44.722773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.404 [2024-10-01 14:27:44.722827] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.404 [2024-10-01 14:27:44.722840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.662 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:02.662 #22 NEW cov: 11784 ft: 14782 corp: 21/60b lim: 5 exec/s: 22 rss: 69Mb L: 5/5 MS: 1 CrossOver- 00:08:02.662 [2024-10-01 14:27:45.022933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.662 [2024-10-01 14:27:45.022971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.023044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.023059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.663 #23 NEW cov: 11784 ft: 14809 corp: 22/62b lim: 5 exec/s: 23 rss: 69Mb L: 2/5 MS: 1 CrossOver- 00:08:02.663 [2024-10-01 14:27:45.073515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.073543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.073615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.073629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.073683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.073697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.073762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.073775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.073831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.073845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.663 #24 NEW cov: 11784 ft: 14876 corp: 23/67b lim: 5 exec/s: 24 rss: 69Mb L: 5/5 MS: 1 ChangeBinInt- 00:08:02.663 [2024-10-01 14:27:45.113130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.113156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.113214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.113228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.663 #25 NEW cov: 11784 ft: 14920 corp: 24/69b lim: 5 exec/s: 25 rss: 69Mb L: 2/5 MS: 1 CrossOver- 00:08:02.663 [2024-10-01 14:27:45.153677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.153704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.153781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.153796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.153853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.153867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.153925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.153938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.663 [2024-10-01 14:27:45.153999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.663 [2024-10-01 14:27:45.154012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.663 #26 NEW cov: 11784 ft: 14950 corp: 25/74b lim: 5 exec/s: 26 rss: 69Mb L: 5/5 MS: 1 CopyPart- 00:08:02.922 [2024-10-01 14:27:45.193824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.193850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.193908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.193921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.193977] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.193991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.194049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.194063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.194117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.194130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.922 #27 NEW cov: 11784 ft: 14969 corp: 26/79b lim: 5 exec/s: 27 rss: 70Mb L: 5/5 MS: 1 ChangeBit- 00:08:02.922 [2024-10-01 14:27:45.244004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.244030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.244105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.244119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.244178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.244192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.244249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.244262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.244319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.244332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.922 #28 NEW cov: 11784 ft: 14971 corp: 27/84b lim: 5 exec/s: 28 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:08:02.922 [2024-10-01 14:27:45.293961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.293986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.294044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.294057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.294129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.294143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.294201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.294214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.922 #29 NEW cov: 11784 ft: 14982 corp: 28/88b lim: 5 exec/s: 29 rss: 70Mb L: 4/5 MS: 1 ChangeBit- 00:08:02.922 [2024-10-01 14:27:45.333902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.333926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.333982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.333996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.334054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.334067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.922 #30 NEW cov: 11784 ft: 14993 corp: 29/91b lim: 5 exec/s: 30 rss: 70Mb L: 3/5 MS: 1 EraseBytes- 00:08:02.922 [2024-10-01 14:27:45.383890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.383915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.383973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.383986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.922 #31 NEW cov: 11784 ft: 15112 corp: 30/93b lim: 5 exec/s: 31 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:08:02.922 [2024-10-01 14:27:45.434491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.434516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.434574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.434591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.434648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.434661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.434722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.434736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.922 [2024-10-01 14:27:45.434791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.922 [2024-10-01 14:27:45.434804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.181 #32 NEW cov: 11784 ft: 15216 corp: 31/98b lim: 5 exec/s: 32 rss: 70Mb L: 5/5 MS: 1 PersAutoDict- DE: "\001\000\000\014"- 00:08:03.181 [2024-10-01 14:27:45.474015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.474040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.181 #33 NEW cov: 11784 ft: 15228 corp: 32/99b lim: 5 exec/s: 33 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:08:03.181 [2024-10-01 14:27:45.514783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.514807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.181 [2024-10-01 14:27:45.514863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.514877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.181 [2024-10-01 14:27:45.514934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.514948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.181 [2024-10-01 14:27:45.515004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.515018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.181 [2024-10-01 14:27:45.515075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.515089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.181 #34 NEW cov: 11784 ft: 15236 corp: 33/104b lim: 5 exec/s: 34 rss: 70Mb L: 5/5 MS: 1 ChangeBinInt- 00:08:03.181 [2024-10-01 14:27:45.564387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.564412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.181 [2024-10-01 14:27:45.564485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.564503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.181 #35 NEW cov: 11784 ft: 15274 corp: 34/106b lim: 5 exec/s: 35 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:08:03.181 [2024-10-01 14:27:45.614534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.614559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.181 [2024-10-01 14:27:45.614634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.614648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.181 #36 NEW cov: 11784 ft: 15278 corp: 35/108b lim: 5 exec/s: 36 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:08:03.181 [2024-10-01 14:27:45.654641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.654665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.181 [2024-10-01 14:27:45.654741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.654756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.181 #37 NEW cov: 11784 ft: 15338 corp: 36/110b lim: 5 exec/s: 37 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:08:03.181 [2024-10-01 14:27:45.694947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.694971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.181 [2024-10-01 14:27:45.695044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.695058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.181 [2024-10-01 14:27:45.695116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.181 [2024-10-01 14:27:45.695129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.440 #38 NEW cov: 11784 ft: 15358 corp: 37/113b lim: 5 exec/s: 38 rss: 70Mb L: 3/5 MS: 1 EraseBytes- 00:08:03.440 [2024-10-01 14:27:45.734873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.440 [2024-10-01 14:27:45.734897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.440 [2024-10-01 14:27:45.734954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.440 [2024-10-01 14:27:45.734968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.440 #39 NEW cov: 11784 ft: 15362 corp: 38/115b lim: 5 exec/s: 39 rss: 70Mb L: 2/5 MS: 1 CMP- DE: "\377\377"- 00:08:03.440 [2024-10-01 14:27:45.785350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.440 [2024-10-01 14:27:45.785378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.440 [2024-10-01 14:27:45.785436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.440 [2024-10-01 14:27:45.785450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.441 [2024-10-01 14:27:45.785506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.441 [2024-10-01 14:27:45.785519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.441 [2024-10-01 14:27:45.785576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.441 [2024-10-01 14:27:45.785589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.441 #40 NEW cov: 11784 ft: 15369 corp: 39/119b lim: 5 exec/s: 40 rss: 70Mb L: 4/5 MS: 1 ChangeBinInt- 
00:08:03.441 [2024-10-01 14:27:45.825616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.441 [2024-10-01 14:27:45.825641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.441 [2024-10-01 14:27:45.825699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.441 [2024-10-01 14:27:45.825713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.441 [2024-10-01 14:27:45.825791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.441 [2024-10-01 14:27:45.825805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.441 [2024-10-01 14:27:45.825864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.441 [2024-10-01 14:27:45.825878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.441 [2024-10-01 14:27:45.825938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.441 [2024-10-01 14:27:45.825951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.441 #41 NEW cov: 11784 ft: 15384 corp: 40/124b lim: 5 exec/s: 20 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:08:03.441 #41 DONE cov: 11784 ft: 15384 corp: 40/124b lim: 5 exec/s: 20 rss: 70Mb 00:08:03.441 ###### Recommended dictionary. ###### 00:08:03.441 "\001\000\000\014" # Uses: 1 00:08:03.441 "\377\377" # Uses: 0 00:08:03.441 ###### End of recommended dictionary. 
###### 00:08:03.441 Done 41 runs in 2 second(s) 00:08:03.700 14:27:45 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_8.conf 00:08:03.700 14:27:45 -- ../common.sh@72 -- # (( i++ )) 00:08:03.700 14:27:45 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.700 14:27:45 -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:08:03.700 14:27:45 -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:08:03.700 14:27:45 -- nvmf/run.sh@24 -- # local timen=1 00:08:03.700 14:27:45 -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.700 14:27:45 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:03.700 14:27:45 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:08:03.700 14:27:45 -- nvmf/run.sh@29 -- # printf %02d 9 00:08:03.700 14:27:45 -- nvmf/run.sh@29 -- # port=4409 00:08:03.700 14:27:45 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:03.700 14:27:45 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:08:03.700 14:27:45 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.700 14:27:46 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 -r /var/tmp/spdk9.sock 00:08:03.700 [2024-10-01 14:27:46.029403] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:03.700 [2024-10-01 14:27:46.029483] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid703867 ] 00:08:03.700 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.958 [2024-10-01 14:27:46.340440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.958 [2024-10-01 14:27:46.427647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:03.958 [2024-10-01 14:27:46.427795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.216 [2024-10-01 14:27:46.486636] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.216 [2024-10-01 14:27:46.502866] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:08:04.216 INFO: Running with entropic power schedule (0xFF, 100). 
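The nvmf/run.sh trace above shows how each fuzzer instance is wired to its own TCP listener: the two-digit fuzzer number is appended to the 44xx port range and substituted into a private copy of fuzz_json.conf. A minimal shell sketch of that setup, reconstructed from the trace ($rootdir and the variable names are assumptions; only the paths, the sed expression, and the command sequence come from the log itself):

# Reconstructed sketch, not the authoritative run.sh: derive the per-fuzzer
# port and config the same way the trace above does for fuzzer_type=9.
fuzzer_type=9
port="44$(printf %02d "$fuzzer_type")"     # printf %02d 9 -> "09", so port 4409
corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"
mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
# Rewrite the default trsvcid 4420 in the JSON config to this fuzzer's port.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_${fuzzer_type}.conf"

The llvm_nvme_fuzz invocation in the trace then passes this trid via -F, the corpus directory via -D, the rewritten config via -c, and the fuzzer number via -Z, as shown on the command line above.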
00:08:04.216 INFO: Seed: 3571108393 00:08:04.216 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:04.216 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:04.216 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:04.216 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.216 [2024-10-01 14:27:46.579912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.216 [2024-10-01 14:27:46.579959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.216 #2 INITED cov: 11557 ft: 11531 corp: 1/1b exec/s: 0 rss: 68Mb 00:08:04.216 [2024-10-01 14:27:46.630043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.216 [2024-10-01 14:27:46.630077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.216 [2024-10-01 14:27:46.630203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.216 [2024-10-01 14:27:46.630222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.216 #3 NEW cov: 11670 ft: 12865 corp: 2/3b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CopyPart- 00:08:04.216 [2024-10-01 14:27:46.690005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.216 [2024-10-01 14:27:46.690035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.216 [2024-10-01 14:27:46.690164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.216 [2024-10-01 14:27:46.690182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.216 #4 NEW cov: 11676 ft: 13192 corp: 3/5b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 CrossOver- 00:08:04.474 [2024-10-01 14:27:46.740792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.474 [2024-10-01 14:27:46.740822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.474 [2024-10-01 14:27:46.740928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.474 [2024-10-01 14:27:46.740948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.474 [2024-10-01 14:27:46.741059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.474 
[2024-10-01 14:27:46.741076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.474 [2024-10-01 14:27:46.741189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.474 [2024-10-01 14:27:46.741206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.474 #5 NEW cov: 11761 ft: 13707 corp: 4/9b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 CopyPart- 00:08:04.474 [2024-10-01 14:27:46.800363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.474 [2024-10-01 14:27:46.800391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.474 [2024-10-01 14:27:46.800500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.474 [2024-10-01 14:27:46.800520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.474 #6 NEW cov: 11761 ft: 13777 corp: 5/11b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:04.474 [2024-10-01 14:27:46.850541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.474 [2024-10-01 14:27:46.850570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.475 [2024-10-01 14:27:46.850680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.475 [2024-10-01 14:27:46.850698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.475 #7 NEW cov: 11761 ft: 13889 corp: 6/13b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ChangeBit- 00:08:04.475 [2024-10-01 14:27:46.910736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.475 [2024-10-01 14:27:46.910765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.475 [2024-10-01 14:27:46.910882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.475 [2024-10-01 14:27:46.910899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.475 #8 NEW cov: 11761 ft: 14043 corp: 7/15b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ChangeBit- 00:08:04.475 [2024-10-01 14:27:46.960865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.475 [2024-10-01 14:27:46.960896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.475 [2024-10-01 14:27:46.961009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.475 [2024-10-01 14:27:46.961026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.475 #9 NEW cov: 11761 ft: 14061 corp: 8/17b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:04.733 [2024-10-01 14:27:47.021116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.733 [2024-10-01 14:27:47.021145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 [2024-10-01 14:27:47.021254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.733 [2024-10-01 14:27:47.021271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.733 #10 NEW cov: 11761 ft: 14083 corp: 9/19b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ChangeByte- 00:08:04.733 [2024-10-01 14:27:47.071023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.733 [2024-10-01 14:27:47.071051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 #11 NEW cov: 11761 ft: 14150 corp: 10/20b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 EraseBytes- 00:08:04.733 [2024-10-01 14:27:47.121827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.733 [2024-10-01 14:27:47.121858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 [2024-10-01 14:27:47.121986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.733 [2024-10-01 14:27:47.122006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.733 [2024-10-01 14:27:47.122113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.733 [2024-10-01 14:27:47.122132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.733 #12 NEW cov: 11761 ft: 14356 corp: 11/23b lim: 5 exec/s: 0 rss: 69Mb L: 3/4 MS: 1 CrossOver- 00:08:04.733 [2024-10-01 14:27:47.171409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.733 [2024-10-01 14:27:47.171438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 #13 NEW cov: 11761 ft: 14417 corp: 12/24b lim: 5 
exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ChangeByte- 00:08:04.733 [2024-10-01 14:27:47.231933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.733 [2024-10-01 14:27:47.231962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 [2024-10-01 14:27:47.232081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.733 [2024-10-01 14:27:47.232103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.992 #14 NEW cov: 11761 ft: 14428 corp: 13/26b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:04.992 [2024-10-01 14:27:47.292418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.992 [2024-10-01 14:27:47.292446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.992 [2024-10-01 14:27:47.292557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.992 [2024-10-01 14:27:47.292574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.992 [2024-10-01 14:27:47.292678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.992 [2024-10-01 14:27:47.292696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.992 #15 NEW cov: 11761 ft: 14444 corp: 14/29b lim: 5 exec/s: 0 rss: 69Mb L: 3/4 MS: 1 InsertByte- 00:08:04.992 [2024-10-01 14:27:47.352871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.992 [2024-10-01 14:27:47.352903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.992 [2024-10-01 14:27:47.353018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.992 [2024-10-01 14:27:47.353038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.992 [2024-10-01 14:27:47.353153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.992 [2024-10-01 14:27:47.353172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.992 [2024-10-01 14:27:47.353280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.992 [2024-10-01 14:27:47.353299] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.992 #16 NEW cov: 11761 ft: 14504 corp: 15/33b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 CopyPart- 00:08:04.992 [2024-10-01 14:27:47.412092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.992 [2024-10-01 14:27:47.412120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.250 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:05.250 #17 NEW cov: 11784 ft: 14593 corp: 16/34b lim: 5 exec/s: 17 rss: 70Mb L: 1/4 MS: 1 ShuffleBytes- 00:08:05.250 [2024-10-01 14:27:47.723644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.250 [2024-10-01 14:27:47.723683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.250 #18 NEW cov: 11784 ft: 14696 corp: 17/35b lim: 5 exec/s: 18 rss: 70Mb L: 1/4 MS: 1 ChangeBit- 00:08:05.509 [2024-10-01 14:27:47.775212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.775244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.509 [2024-10-01 14:27:47.775337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.775354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.509 [2024-10-01 14:27:47.775440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.775455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.509 [2024-10-01 14:27:47.775551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.775565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.509 #19 NEW cov: 11784 ft: 14756 corp: 18/39b lim: 5 exec/s: 19 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:05.509 [2024-10-01 14:27:47.835249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.835277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.509 [2024-10-01 14:27:47.835372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 
14:27:47.835388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.509 [2024-10-01 14:27:47.835483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.835498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.509 #20 NEW cov: 11784 ft: 14798 corp: 19/42b lim: 5 exec/s: 20 rss: 70Mb L: 3/4 MS: 1 CrossOver- 00:08:05.509 [2024-10-01 14:27:47.895136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.895162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.509 [2024-10-01 14:27:47.895250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.895265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.509 #21 NEW cov: 11784 ft: 14830 corp: 20/44b lim: 5 exec/s: 21 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:08:05.509 [2024-10-01 14:27:47.945169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.945194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.509 #22 NEW cov: 11784 ft: 14840 corp: 21/45b lim: 5 exec/s: 22 rss: 70Mb L: 1/4 MS: 1 CrossOver- 00:08:05.509 [2024-10-01 14:27:47.995350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.509 [2024-10-01 14:27:47.995374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.509 #23 NEW cov: 11784 ft: 14877 corp: 22/46b lim: 5 exec/s: 23 rss: 70Mb L: 1/4 MS: 1 ShuffleBytes- 00:08:05.768 [2024-10-01 14:27:48.046639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.046665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.768 [2024-10-01 14:27:48.046759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.046776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.768 [2024-10-01 14:27:48.046853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.046868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.768 [2024-10-01 14:27:48.046964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.046981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.768 #24 NEW cov: 11784 ft: 14903 corp: 23/50b lim: 5 exec/s: 24 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:08:05.768 [2024-10-01 14:27:48.096642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.096667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.768 [2024-10-01 14:27:48.096775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.096791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.768 [2024-10-01 14:27:48.096884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.096900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.768 #25 NEW cov: 11784 ft: 14909 corp: 24/53b lim: 5 exec/s: 25 rss: 70Mb L: 3/4 MS: 1 ChangeBit- 00:08:05.768 [2024-10-01 14:27:48.156589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.156615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.768 [2024-10-01 14:27:48.156699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.156714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.768 #26 NEW cov: 11784 ft: 14945 corp: 25/55b lim: 5 exec/s: 26 rss: 70Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:05.768 [2024-10-01 14:27:48.206580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.206605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.768 #27 NEW cov: 11784 ft: 14968 corp: 26/56b lim: 5 exec/s: 27 rss: 70Mb L: 1/4 MS: 1 ChangeByte- 00:08:05.768 [2024-10-01 14:27:48.257278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.768 [2024-10-01 14:27:48.257309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.769 [2024-10-01 14:27:48.257407] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.769 [2024-10-01 14:27:48.257421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.769 #28 NEW cov: 11784 ft: 14979 corp: 27/58b lim: 5 exec/s: 28 rss: 71Mb L: 2/4 MS: 1 CopyPart- 00:08:06.028 [2024-10-01 14:27:48.318575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.318600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.318694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.318710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.318799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.318815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.318908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.318923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.028 #29 NEW cov: 11784 ft: 14992 corp: 28/62b lim: 5 exec/s: 29 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:06.028 [2024-10-01 14:27:48.369297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.369323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.369427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.369443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.369530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.369545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.369632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.369647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.369731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.369745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:06.028 #30 NEW cov: 11784 ft: 15061 corp: 29/67b lim: 5 exec/s: 30 rss: 71Mb L: 5/5 MS: 1 InsertByte- 00:08:06.028 [2024-10-01 14:27:48.419900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.419928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.420022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.420038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.420128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.420143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.420234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.420251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.420341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.420356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:06.028 #31 NEW cov: 11784 ft: 15123 corp: 30/72b lim: 5 exec/s: 31 rss: 71Mb L: 5/5 MS: 1 InsertByte- 00:08:06.028 [2024-10-01 14:27:48.469731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.469756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.469848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.469863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.469946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.469961] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.470050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.470068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.028 #32 NEW cov: 11784 ft: 15142 corp: 31/76b lim: 5 exec/s: 32 rss: 71Mb L: 4/5 MS: 1 InsertByte- 00:08:06.028 [2024-10-01 14:27:48.529354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.529381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.028 [2024-10-01 14:27:48.529482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-10-01 14:27:48.529497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.028 #33 NEW cov: 11784 ft: 15164 corp: 32/78b lim: 5 exec/s: 16 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:08:06.028 #33 DONE cov: 11784 ft: 15164 corp: 32/78b lim: 5 exec/s: 16 rss: 71Mb 00:08:06.028 Done 33 runs in 2 second(s) 00:08:06.287 14:27:48 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_9.conf 00:08:06.287 14:27:48 -- ../common.sh@72 -- # (( i++ )) 00:08:06.288 14:27:48 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.288 14:27:48 -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:08:06.288 14:27:48 -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:08:06.288 14:27:48 -- nvmf/run.sh@24 -- # local timen=1 00:08:06.288 14:27:48 -- nvmf/run.sh@25 -- # local core=0x1 00:08:06.288 14:27:48 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:06.288 14:27:48 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:08:06.288 14:27:48 -- nvmf/run.sh@29 -- # printf %02d 10 00:08:06.288 14:27:48 -- nvmf/run.sh@29 -- # port=4410 00:08:06.288 14:27:48 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:06.288 14:27:48 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:08:06.288 14:27:48 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:06.288 14:27:48 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 -r /var/tmp/spdk10.sock 00:08:06.288 [2024-10-01 14:27:48.727620] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
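Each libFuzzer status line in this transcript follows the stock layout "#<execs> <event> cov: <edges> ft: <features> corp: <units>/<bytes> lim: <max-len> exec/s: <rate> rss: <mem> L: <len>/<max> MS: <n> <mutations>", and each fuzzer closes with a "#N DONE ..." summary followed by "Done N runs in T second(s)". A throwaway awk sketch for pulling the final coverage and corpus size of every completed fuzzer out of a saved log (console.log is a hypothetical file name, not something the harness writes):

# Hedged helper, not part of the test suite: summarize each fuzzer's
# closing "#N DONE cov: ... corp: ..." line from a saved console log.
awk '/DONE cov:/ {
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^#[0-9]+$/) execs = substr($i, 2)
        if ($i == "cov:")     cov   = $(i + 1)
        if ($i == "corp:")    corp  = $(i + 1)
    }
    print "execs=" execs, "cov=" cov, "corp=" corp
}' console.log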
00:08:06.288 [2024-10-01 14:27:48.727712] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704235 ] 00:08:06.288 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.546 [2024-10-01 14:27:49.040339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.803 [2024-10-01 14:27:49.127828] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:06.803 [2024-10-01 14:27:49.127975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.803 [2024-10-01 14:27:49.186541] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.803 [2024-10-01 14:27:49.202763] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:08:06.804 INFO: Running with entropic power schedule (0xFF, 100). 00:08:06.804 INFO: Seed: 1977119784 00:08:06.804 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:06.804 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:06.804 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:06.804 INFO: A corpus is not provided, starting from an empty corpus 00:08:06.804 #2 INITED exec/s: 0 rss: 61Mb 00:08:06.804 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:06.804 This may also happen if the target rejected all inputs we tried so far 00:08:06.804 [2024-10-01 14:27:49.280300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.804 [2024-10-01 14:27:49.280347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.804 [2024-10-01 14:27:49.280499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.804 [2024-10-01 14:27:49.280522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.804 [2024-10-01 14:27:49.280664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.804 [2024-10-01 14:27:49.280688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.061 NEW_FUNC[1/670]: 0x447688 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:08:07.061 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:07.061 #19 NEW cov: 11580 ft: 11581 corp: 2/28b lim: 40 exec/s: 0 rss: 68Mb L: 27/27 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:07.319 [2024-10-01 14:27:49.601173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.601217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.601326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.601346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.601447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.601465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.601559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.601577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.320 #20 NEW cov: 11693 ft: 12664 corp: 3/67b lim: 40 exec/s: 0 rss: 68Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:08:07.320 [2024-10-01 14:27:49.651252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.651279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.651372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.651389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.651486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.651502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.651596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.651612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.320 #26 NEW cov: 11699 ft: 12801 corp: 4/106b lim: 40 exec/s: 0 rss: 68Mb L: 39/39 MS: 1 ShuffleBytes- 00:08:07.320 [2024-10-01 14:27:49.711423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.711449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.711554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.711572] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.711670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.711685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.711776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.711790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.320 #27 NEW cov: 11784 ft: 13047 corp: 5/145b lim: 40 exec/s: 0 rss: 68Mb L: 39/39 MS: 1 ShuffleBytes- 00:08:07.320 [2024-10-01 14:27:49.771408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.771433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.771528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.771544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.771642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.771656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.320 #28 NEW cov: 11784 ft: 13081 corp: 6/172b lim: 40 exec/s: 0 rss: 68Mb L: 27/39 MS: 1 CopyPart- 00:08:07.320 [2024-10-01 14:27:49.831934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.831962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.832062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.832078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.832172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.832188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.320 [2024-10-01 14:27:49.832287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:07.320 [2024-10-01 14:27:49.832302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.579 #29 NEW cov: 11784 ft: 13185 corp: 7/211b lim: 40 exec/s: 0 rss: 68Mb L: 39/39 MS: 1 ChangeBinInt- 00:08:07.579 [2024-10-01 14:27:49.891940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:49.891968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.579 [2024-10-01 14:27:49.892063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:49.892081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.579 [2024-10-01 14:27:49.892188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:49.892202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.579 #30 NEW cov: 11784 ft: 13227 corp: 8/235b lim: 40 exec/s: 0 rss: 69Mb L: 24/39 MS: 1 EraseBytes- 00:08:07.579 [2024-10-01 14:27:49.952099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:49.952126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.579 [2024-10-01 14:27:49.952228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:49.952243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.579 [2024-10-01 14:27:49.952332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:49.952347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.579 #31 NEW cov: 11784 ft: 13290 corp: 9/262b lim: 40 exec/s: 0 rss: 69Mb L: 27/39 MS: 1 ChangeByte- 00:08:07.579 [2024-10-01 14:27:50.012682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:cf0acfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:50.012709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.579 [2024-10-01 14:27:50.012811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:50.012839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.579 [2024-10-01 
14:27:50.012930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:50.012946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.579 [2024-10-01 14:27:50.013038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:50.013054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.579 #33 NEW cov: 11784 ft: 13361 corp: 10/298b lim: 40 exec/s: 0 rss: 69Mb L: 36/39 MS: 2 CrossOver-CrossOver- 00:08:07.579 [2024-10-01 14:27:50.062666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:50.062692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.579 [2024-10-01 14:27:50.062783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:50.062799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.579 [2024-10-01 14:27:50.062882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.579 [2024-10-01 14:27:50.062899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.579 #34 NEW cov: 11784 ft: 13427 corp: 11/325b lim: 40 exec/s: 0 rss: 69Mb L: 27/39 MS: 1 ShuffleBytes- 00:08:07.838 [2024-10-01 14:27:50.113508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.113533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.838 [2024-10-01 14:27:50.113629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.113645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.838 [2024-10-01 14:27:50.113740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.113756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.838 [2024-10-01 14:27:50.113852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.113868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.838 [2024-10-01 14:27:50.113966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.113982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:07.838 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:07.838 #35 NEW cov: 11807 ft: 13514 corp: 12/365b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:08:07.838 [2024-10-01 14:27:50.163215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.163240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.838 [2024-10-01 14:27:50.163343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.163359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.838 [2024-10-01 14:27:50.163453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.163469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.838 #36 NEW cov: 11807 ft: 13554 corp: 13/392b lim: 40 exec/s: 0 rss: 69Mb L: 27/40 MS: 1 ShuffleBytes- 00:08:07.838 [2024-10-01 14:27:50.223414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.223439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.838 [2024-10-01 14:27:50.223534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:26000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.838 [2024-10-01 14:27:50.223552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.839 [2024-10-01 14:27:50.223642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.839 [2024-10-01 14:27:50.223658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.839 #37 NEW cov: 11807 ft: 13647 corp: 14/419b lim: 40 exec/s: 0 rss: 69Mb L: 27/40 MS: 1 ShuffleBytes- 00:08:07.839 [2024-10-01 14:27:50.274088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:21ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.839 [2024-10-01 14:27:50.274117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.839 [2024-10-01 14:27:50.274207] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.839 [2024-10-01 14:27:50.274223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.839 [2024-10-01 14:27:50.274313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.839 [2024-10-01 14:27:50.274330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.839 [2024-10-01 14:27:50.274421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.839 [2024-10-01 14:27:50.274437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.839 #42 NEW cov: 11807 ft: 13674 corp: 15/453b lim: 40 exec/s: 42 rss: 69Mb L: 34/40 MS: 5 ChangeByte-ShuffleBytes-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:08:07.839 [2024-10-01 14:27:50.324261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:cf0acfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.839 [2024-10-01 14:27:50.324288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.839 [2024-10-01 14:27:50.324377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.839 [2024-10-01 14:27:50.324393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.839 [2024-10-01 14:27:50.324484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.839 [2024-10-01 14:27:50.324501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.839 [2024-10-01 14:27:50.324595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.839 [2024-10-01 14:27:50.324611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.839 #43 NEW cov: 11807 ft: 13682 corp: 16/492b lim: 40 exec/s: 43 rss: 69Mb L: 39/40 MS: 1 ShuffleBytes- 00:08:08.097 [2024-10-01 14:27:50.374591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.374619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.097 [2024-10-01 14:27:50.374713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cf7ecfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.374737] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.097 [2024-10-01 14:27:50.374832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.374848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.097 [2024-10-01 14:27:50.374940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.374956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.097 #44 NEW cov: 11807 ft: 13691 corp: 17/531b lim: 40 exec/s: 44 rss: 69Mb L: 39/40 MS: 1 ChangeByte- 00:08:08.097 [2024-10-01 14:27:50.424540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0000ff cdw11:f6000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.424567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.097 [2024-10-01 14:27:50.424656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.424672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.097 [2024-10-01 14:27:50.424767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.424784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.097 #45 NEW cov: 11807 ft: 13755 corp: 18/558b lim: 40 exec/s: 45 rss: 69Mb L: 27/40 MS: 1 ChangeBinInt- 00:08:08.097 [2024-10-01 14:27:50.474238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.474265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.097 #46 NEW cov: 11807 ft: 14098 corp: 19/571b lim: 40 exec/s: 46 rss: 69Mb L: 13/40 MS: 1 CrossOver- 00:08:08.097 [2024-10-01 14:27:50.525291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.525318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.097 [2024-10-01 14:27:50.525405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:26000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.525421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.097 [2024-10-01 14:27:50.525511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.525527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.097 #47 NEW cov: 11807 ft: 14192 corp: 20/598b lim: 40 exec/s: 47 rss: 69Mb L: 27/40 MS: 1 CrossOver- 00:08:08.097 [2024-10-01 14:27:50.585536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.585561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.097 [2024-10-01 14:27:50.585661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.585678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.097 [2024-10-01 14:27:50.585766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.097 [2024-10-01 14:27:50.585781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.097 #48 NEW cov: 11807 ft: 14218 corp: 21/625b lim: 40 exec/s: 48 rss: 69Mb L: 27/40 MS: 1 ChangeBinInt- 00:08:08.356 [2024-10-01 14:27:50.636163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:21ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.356 [2024-10-01 14:27:50.636199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.356 [2024-10-01 14:27:50.636287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.356 [2024-10-01 14:27:50.636303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.356 [2024-10-01 14:27:50.636400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.356 [2024-10-01 14:27:50.636415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.356 [2024-10-01 14:27:50.636503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.356 [2024-10-01 14:27:50.636520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.356 #49 NEW cov: 11807 ft: 14228 corp: 22/663b lim: 40 exec/s: 49 rss: 70Mb L: 38/40 MS: 1 InsertRepeatedBytes- 00:08:08.356 [2024-10-01 14:27:50.696365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcf38 cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.356 [2024-10-01 14:27:50.696390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.356 [2024-10-01 14:27:50.696480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.356 [2024-10-01 14:27:50.696496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.356 [2024-10-01 14:27:50.696598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.356 [2024-10-01 14:27:50.696613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.356 [2024-10-01 14:27:50.696707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.356 [2024-10-01 14:27:50.696726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.356 #50 NEW cov: 11807 ft: 14273 corp: 23/702b lim: 40 exec/s: 50 rss: 70Mb L: 39/40 MS: 1 ChangeByte- 00:08:08.356 [2024-10-01 14:27:50.747099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:cf0acfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.356 [2024-10-01 14:27:50.747130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.356 [2024-10-01 14:27:50.747220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.747236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.357 [2024-10-01 14:27:50.747327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.747342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.357 [2024-10-01 14:27:50.747427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.747442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.357 [2024-10-01 14:27:50.747528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.747542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.357 #51 NEW cov: 11807 ft: 14316 corp: 24/742b lim: 40 exec/s: 51 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:08:08.357 [2024-10-01 14:27:50.807173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:cf0acfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.807198] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.357 [2024-10-01 14:27:50.807287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.807303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.357 [2024-10-01 14:27:50.807400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.807416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.357 [2024-10-01 14:27:50.807505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.807519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.357 #52 NEW cov: 11807 ft: 14319 corp: 25/781b lim: 40 exec/s: 52 rss: 70Mb L: 39/40 MS: 1 ShuffleBytes- 00:08:08.357 [2024-10-01 14:27:50.857133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.857158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.357 [2024-10-01 14:27:50.857244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.857260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.357 [2024-10-01 14:27:50.857356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.357 [2024-10-01 14:27:50.857370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.357 #53 NEW cov: 11807 ft: 14373 corp: 26/808b lim: 40 exec/s: 53 rss: 70Mb L: 27/40 MS: 1 ShuffleBytes- 00:08:08.616 [2024-10-01 14:27:50.907508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcf38 cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:50.907535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:50.907626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:50.907641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:50.907733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:08:08.616 [2024-10-01 14:27:50.907761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:50.907848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:50.907863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.616 #54 NEW cov: 11807 ft: 14379 corp: 27/847b lim: 40 exec/s: 54 rss: 70Mb L: 39/40 MS: 1 CopyPart- 00:08:08.616 [2024-10-01 14:27:50.967407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:50.967435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:50.967537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:50.967553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:50.967652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:50.967667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.027884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.027912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.028010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.028026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.028119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.028135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.028230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:01040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.028246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.616 #56 NEW cov: 11807 ft: 14394 corp: 28/884b lim: 40 exec/s: 56 rss: 70Mb L: 37/40 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:08.616 [2024-10-01 14:27:51.077794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 
nsid:0 cdw10:0a000200 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.077825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.077934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:26000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.077949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.078042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.078057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.616 #57 NEW cov: 11807 ft: 14406 corp: 29/911b lim: 40 exec/s: 57 rss: 70Mb L: 27/40 MS: 1 ChangeBit- 00:08:08.616 [2024-10-01 14:27:51.138624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.138649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.138749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.138766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.138870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.138892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.138992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.139008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.616 [2024-10-01 14:27:51.139100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.616 [2024-10-01 14:27:51.139114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.875 #58 NEW cov: 11807 ft: 14434 corp: 30/951b lim: 40 exec/s: 58 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:08.875 [2024-10-01 14:27:51.198231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.875 [2024-10-01 14:27:51.198256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.875 [2024-10-01 14:27:51.198364] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:005d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.875 [2024-10-01 14:27:51.198380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.875 [2024-10-01 14:27:51.198491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.875 [2024-10-01 14:27:51.198510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.875 #59 NEW cov: 11807 ft: 14451 corp: 31/978b lim: 40 exec/s: 59 rss: 70Mb L: 27/40 MS: 1 ChangeByte- 00:08:08.875 [2024-10-01 14:27:51.248896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:21ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.875 [2024-10-01 14:27:51.248920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.875 [2024-10-01 14:27:51.249022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.875 [2024-10-01 14:27:51.249038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.875 [2024-10-01 14:27:51.249133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.875 [2024-10-01 14:27:51.249150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.875 [2024-10-01 14:27:51.249244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.875 [2024-10-01 14:27:51.249258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.875 #60 NEW cov: 11807 ft: 14482 corp: 32/1017b lim: 40 exec/s: 30 rss: 70Mb L: 39/40 MS: 1 InsertByte- 00:08:08.875 #60 DONE cov: 11807 ft: 14482 corp: 32/1017b lim: 40 exec/s: 30 rss: 70Mb 00:08:08.875 Done 60 runs in 2 second(s) 00:08:09.134 14:27:51 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_10.conf 00:08:09.134 14:27:51 -- ../common.sh@72 -- # (( i++ )) 00:08:09.134 14:27:51 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.134 14:27:51 -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:08:09.134 14:27:51 -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:08:09.134 14:27:51 -- nvmf/run.sh@24 -- # local timen=1 00:08:09.134 14:27:51 -- nvmf/run.sh@25 -- # local core=0x1 00:08:09.134 14:27:51 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:09.134 14:27:51 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:08:09.134 14:27:51 -- nvmf/run.sh@29 -- # printf %02d 11 00:08:09.134 14:27:51 -- nvmf/run.sh@29 -- # port=4411 00:08:09.134 14:27:51 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:09.134 14:27:51 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:08:09.134 14:27:51 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:09.134 14:27:51 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 -r /var/tmp/spdk11.sock 00:08:09.134 [2024-10-01 14:27:51.438483] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:09.134 [2024-10-01 14:27:51.438547] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704605 ] 00:08:09.134 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.134 [2024-10-01 14:27:51.635065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.392 [2024-10-01 14:27:51.706484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.392 [2024-10-01 14:27:51.706623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.392 [2024-10-01 14:27:51.765562] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.392 [2024-10-01 14:27:51.781781] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:08:09.392 INFO: Running with entropic power schedule (0xFF, 100). 00:08:09.392 INFO: Seed: 259183229 00:08:09.392 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:09.392 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:09.392 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:09.392 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.392 #2 INITED exec/s: 0 rss: 62Mb 00:08:09.392 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:09.392 This may also happen if the target rejected all inputs we tried so far 00:08:09.392 [2024-10-01 14:27:51.859642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.392 [2024-10-01 14:27:51.859679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.392 [2024-10-01 14:27:51.859805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.392 [2024-10-01 14:27:51.859822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.651 NEW_FUNC[1/671]: 0x4493f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:08:09.651 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:09.651 #4 NEW cov: 11592 ft: 11593 corp: 2/17b lim: 40 exec/s: 0 rss: 69Mb L: 16/16 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:09.910 [2024-10-01 14:27:52.190270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.190314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.910 [2024-10-01 14:27:52.190417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.190434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.910 #16 NEW cov: 11705 ft: 12102 corp: 3/34b lim: 40 exec/s: 0 rss: 69Mb L: 17/17 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:09.910 [2024-10-01 14:27:52.240511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.240541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.910 [2024-10-01 14:27:52.240648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.240663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.910 #17 NEW cov: 11711 ft: 12271 corp: 4/50b lim: 40 exec/s: 0 rss: 69Mb L: 16/17 MS: 1 CrossOver- 00:08:09.910 [2024-10-01 14:27:52.300736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a002800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.300763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.910 [2024-10-01 14:27:52.300870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.300886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.910 #18 NEW cov: 11796 ft: 12585 corp: 5/68b lim: 40 exec/s: 0 rss: 69Mb L: 18/18 MS: 1 InsertByte- 00:08:09.910 [2024-10-01 14:27:52.361034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.361062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.910 [2024-10-01 14:27:52.361168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.361184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.910 #19 NEW cov: 11796 ft: 12648 corp: 6/86b lim: 40 exec/s: 0 rss: 69Mb L: 18/18 MS: 1 CopyPart- 00:08:09.910 [2024-10-01 14:27:52.422062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.422088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.910 [2024-10-01 14:27:52.422194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.422209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.910 [2024-10-01 14:27:52.422305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.422319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.910 [2024-10-01 14:27:52.422413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.910 [2024-10-01 14:27:52.422427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.169 #20 NEW cov: 11796 ft: 13080 corp: 7/124b lim: 40 exec/s: 0 rss: 69Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:08:10.169 [2024-10-01 14:27:52.471466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.169 [2024-10-01 14:27:52.471492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.169 [2024-10-01 14:27:52.471583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.169 [2024-10-01 14:27:52.471599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.169 #21 NEW cov: 11796 ft: 13140 corp: 8/142b 
lim: 40 exec/s: 0 rss: 69Mb L: 18/38 MS: 1 ChangeBit- 00:08:10.169 [2024-10-01 14:27:52.531501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.169 [2024-10-01 14:27:52.531527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.169 #22 NEW cov: 11796 ft: 13907 corp: 9/152b lim: 40 exec/s: 0 rss: 69Mb L: 10/38 MS: 1 EraseBytes- 00:08:10.169 [2024-10-01 14:27:52.581859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:ae000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.169 [2024-10-01 14:27:52.581884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.169 [2024-10-01 14:27:52.581973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.169 [2024-10-01 14:27:52.581988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.169 #23 NEW cov: 11796 ft: 13999 corp: 10/168b lim: 40 exec/s: 0 rss: 69Mb L: 16/38 MS: 1 ChangeByte- 00:08:10.169 [2024-10-01 14:27:52.631631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.169 [2024-10-01 14:27:52.631657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.169 #24 NEW cov: 11796 ft: 14044 corp: 11/181b lim: 40 exec/s: 0 rss: 69Mb L: 13/38 MS: 1 EraseBytes- 00:08:10.169 [2024-10-01 14:27:52.682213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.169 [2024-10-01 14:27:52.682239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.169 [2024-10-01 14:27:52.682348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00002400 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.169 [2024-10-01 14:27:52.682366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.428 #25 NEW cov: 11796 ft: 14185 corp: 12/199b lim: 40 exec/s: 0 rss: 69Mb L: 18/38 MS: 1 ChangeByte- 00:08:10.428 [2024-10-01 14:27:52.743279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.428 [2024-10-01 14:27:52.743304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.428 [2024-10-01 14:27:52.743417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.428 [2024-10-01 14:27:52.743434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.428 [2024-10-01 14:27:52.743535] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e319e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.428 [2024-10-01 14:27:52.743550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.429 [2024-10-01 14:27:52.743642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.429 [2024-10-01 14:27:52.743658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.429 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:10.429 #26 NEW cov: 11819 ft: 14255 corp: 13/237b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 ChangeBinInt- 00:08:10.429 [2024-10-01 14:27:52.802725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a002800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.429 [2024-10-01 14:27:52.802749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.429 [2024-10-01 14:27:52.802849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.429 [2024-10-01 14:27:52.802864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.429 #27 NEW cov: 11819 ft: 14286 corp: 14/255b lim: 40 exec/s: 27 rss: 70Mb L: 18/38 MS: 1 ShuffleBytes- 00:08:10.429 [2024-10-01 14:27:52.863682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.429 [2024-10-01 14:27:52.863709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.429 [2024-10-01 14:27:52.863817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.429 [2024-10-01 14:27:52.863833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.429 [2024-10-01 14:27:52.863924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.429 [2024-10-01 14:27:52.863939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.429 [2024-10-01 14:27:52.864042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0000ae00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.429 [2024-10-01 14:27:52.864058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.429 #28 NEW cov: 11819 ft: 14305 corp: 15/293b lim: 40 exec/s: 28 rss: 70Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:08:10.429 [2024-10-01 14:27:52.923966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.429 [2024-10-01 14:27:52.923991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.429 [2024-10-01 14:27:52.924087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.429 [2024-10-01 14:27:52.924102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:10.429 [2024-10-01 14:27:52.924200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.429 [2024-10-01 14:27:52.924214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:10.429 [2024-10-01 14:27:52.924310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e300 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.429 [2024-10-01 14:27:52.924325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:10.429 #29 NEW cov: 11819 ft: 14332 corp: 16/326b lim: 40 exec/s: 29 rss: 70Mb L: 33/38 MS: 1 EraseBytes-
00:08:10.687 [2024-10-01 14:27:52.973562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:ae000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.687 [2024-10-01 14:27:52.973588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.687 [2024-10-01 14:27:52.973701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a00000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.687 [2024-10-01 14:27:52.973722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:10.687 #30 NEW cov: 11819 ft: 14340 corp: 17/346b lim: 40 exec/s: 30 rss: 70Mb L: 20/38 MS: 1 CrossOver-
00:08:10.687 [2024-10-01 14:27:53.025114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ae3e300 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.025140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.688 [2024-10-01 14:27:53.025249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.025264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:10.688 [2024-10-01 14:27:53.025361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:19e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.025376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:10.688 [2024-10-01 14:27:53.025465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.025482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:10.688 [2024-10-01 14:27:53.025584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:e3e3e3e3 cdw11:e3e30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.025600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:08:10.688 #31 NEW cov: 11819 ft: 14435 corp: 18/386b lim: 40 exec/s: 31 rss: 70Mb L: 40/40 MS: 1 CopyPart-
00:08:10.688 [2024-10-01 14:27:53.085206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.085235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.688 [2024-10-01 14:27:53.085331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00001d1c SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.085348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:10.688 [2024-10-01 14:27:53.085435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1ce6e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.085452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:10.688 [2024-10-01 14:27:53.085553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.085569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:10.688 #32 NEW cov: 11819 ft: 14459 corp: 19/424b lim: 40 exec/s: 32 rss: 70Mb L: 38/40 MS: 1 ChangeBinInt-
00:08:10.688 [2024-10-01 14:27:53.134437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:ae000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.134465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.688 #33 NEW cov: 11819 ft: 14476 corp: 20/439b lim: 40 exec/s: 33 rss: 70Mb L: 15/40 MS: 1 EraseBytes-
00:08:10.688 [2024-10-01 14:27:53.185437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:ae000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.185463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.688 [2024-10-01 14:27:53.185561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.185579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:10.688 [2024-10-01 14:27:53.185679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.688 [2024-10-01 14:27:53.185696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:10.688 #34 NEW cov: 11819 ft: 14688 corp: 21/470b lim: 40 exec/s: 34 rss: 70Mb L: 31/40 MS: 1 CrossOver-
00:08:10.947 [2024-10-01 14:27:53.235125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.947 [2024-10-01 14:27:53.235152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.947 #35 NEW cov: 11819 ft: 14704 corp: 22/484b lim: 40 exec/s: 35 rss: 70Mb L: 14/40 MS: 1 CopyPart-
00:08:10.947 [2024-10-01 14:27:53.295811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.947 [2024-10-01 14:27:53.295839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.947 [2024-10-01 14:27:53.295942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.947 [2024-10-01 14:27:53.295958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:10.947 #36 NEW cov: 11819 ft: 14712 corp: 23/500b lim: 40 exec/s: 36 rss: 70Mb L: 16/40 MS: 1 ChangeBit-
00:08:10.947 [2024-10-01 14:27:53.346060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.947 [2024-10-01 14:27:53.346085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.947 [2024-10-01 14:27:53.346169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.947 [2024-10-01 14:27:53.346184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:10.947 #37 NEW cov: 11819 ft: 14742 corp: 24/518b lim: 40 exec/s: 37 rss: 70Mb L: 18/40 MS: 1 ChangeByte-
00:08:10.947 [2024-10-01 14:27:53.396328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:ae000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.947 [2024-10-01 14:27:53.396354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.947 [2024-10-01 14:27:53.396449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a00000a SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.947 [2024-10-01 14:27:53.396466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:10.947 #38 NEW cov: 11819 ft: 14801 corp: 25/538b lim: 40 exec/s: 38 rss: 70Mb L: 20/40 MS: 1 ChangeByte-
00:08:10.947 [2024-10-01 14:27:53.456545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.947 [2024-10-01 14:27:53.456571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:10.947 [2024-10-01 14:27:53.456666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:10.947 [2024-10-01 14:27:53.456680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:11.205 #39 NEW cov: 11819 ft: 14830 corp: 26/555b lim: 40 exec/s: 39 rss: 70Mb L: 17/40 MS: 1 CopyPart-
00:08:11.205 [2024-10-01 14:27:53.517290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:ae000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.205 [2024-10-01 14:27:53.517320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:11.205 [2024-10-01 14:27:53.517415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000da cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.205 [2024-10-01 14:27:53.517432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:11.205 [2024-10-01 14:27:53.517527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.205 [2024-10-01 14:27:53.517543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:11.205 #40 NEW cov: 11819 ft: 14872 corp: 27/586b lim: 40 exec/s: 40 rss: 70Mb L: 31/40 MS: 1 ChangeByte-
00:08:11.205 [2024-10-01 14:27:53.577088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.205 [2024-10-01 14:27:53.577114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:11.205 [2024-10-01 14:27:53.577215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.205 [2024-10-01 14:27:53.577230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:11.205 #41 NEW cov: 11819 ft: 14883 corp: 28/603b lim: 40 exec/s: 41 rss: 70Mb L: 17/40 MS: 1 ShuffleBytes-
00:08:11.205 [2024-10-01 14:27:53.638446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ae3e300 cdw11:002a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.205 [2024-10-01 14:27:53.638471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:11.205 [2024-10-01 14:27:53.638568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.205 [2024-10-01 14:27:53.638582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:11.205 [2024-10-01 14:27:53.638685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:19e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.206 [2024-10-01 14:27:53.638699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:11.206 [2024-10-01 14:27:53.638810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.206 [2024-10-01 14:27:53.638826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:11.206 [2024-10-01 14:27:53.638920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:e3e3e3e3 cdw11:e3e30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.206 [2024-10-01 14:27:53.638936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:08:11.206 #42 NEW cov: 11819 ft: 14893 corp: 29/643b lim: 40 exec/s: 42 rss: 70Mb L: 40/40 MS: 1 ChangeByte-
00:08:11.206 [2024-10-01 14:27:53.698359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.206 [2024-10-01 14:27:53.698384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:11.206 [2024-10-01 14:27:53.698486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.206 [2024-10-01 14:27:53.698505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:11.206 [2024-10-01 14:27:53.698611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.206 [2024-10-01 14:27:53.698628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:11.206 [2024-10-01 14:27:53.698725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.206 [2024-10-01 14:27:53.698740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:11.206 #43 NEW cov: 11819 ft: 14941 corp: 30/681b lim: 40 exec/s: 43 rss: 70Mb L: 38/40 MS: 1 ShuffleBytes-
00:08:11.465 [2024-10-01 14:27:53.748256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.465 [2024-10-01 14:27:53.748281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:11.465 [2024-10-01 14:27:53.748387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.465 [2024-10-01 14:27:53.748404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:11.465 [2024-10-01 14:27:53.748517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff24 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.465 [2024-10-01 14:27:53.748531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:11.465 #44 NEW cov: 11819 ft: 14942 corp: 31/712b lim: 40 exec/s: 44 rss: 70Mb L: 31/40 MS: 1 InsertRepeatedBytes-
00:08:11.465 [2024-10-01 14:27:53.807863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:11.465 [2024-10-01 14:27:53.807888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:11.465 #45 NEW cov: 11819 ft: 14987 corp: 32/726b lim: 40 exec/s: 22 rss: 70Mb L: 14/40 MS: 1 EraseBytes-
00:08:11.465 #45 DONE cov: 11819 ft: 14987 corp: 32/726b lim: 40 exec/s: 22 rss: 70Mb
00:08:11.465 Done 45 runs in 2 second(s)
00:08:11.465 14:27:53 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_11.conf
14:27:53 -- ../common.sh@72 -- # (( i++ ))
14:27:53 -- ../common.sh@72 -- # (( i < fuzz_num ))
14:27:53 -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1
14:27:53 -- nvmf/run.sh@23 -- # local fuzzer_type=12
14:27:53 -- nvmf/run.sh@24 -- # local timen=1
14:27:53 -- nvmf/run.sh@25 -- # local core=0x1
14:27:53 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
14:27:53 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf
14:27:53 -- nvmf/run.sh@29 -- # printf %02d 12
14:27:53 -- nvmf/run.sh@29 -- # port=4412
14:27:53 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
14:27:53 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412'
14:27:53 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
14:27:53 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 -r /var/tmp/spdk12.sock
[2024-10-01 14:27:54.005217] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:08:11.724 [2024-10-01 14:27:54.005289] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704964 ]
EAL: No free 2048 kB hugepages reported on node 1
00:08:11.983 [2024-10-01 14:27:54.325663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:11.983 [2024-10-01 14:27:54.415619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:11.983 [2024-10-01 14:27:54.415759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.983 [2024-10-01 14:27:54.474524] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:11.983 [2024-10-01 14:27:54.490738] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 ***
00:08:11.983 INFO: Running with entropic power schedule (0xFF, 100).
00:08:11.983 INFO: Seed: 2970196784
00:08:12.241 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:08:12.241 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:08:12.241 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:08:12.241 INFO: A corpus is not provided, starting from an empty corpus
00:08:12.241 #2 INITED exec/s: 0 rss: 61Mb
00:08:12.241 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:12.241 This may also happen if the target rejected all inputs we tried so far
00:08:12.241 [2024-10-01 14:27:54.568010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.241 [2024-10-01 14:27:54.568061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:12.500 NEW_FUNC[1/671]: 0x44b168 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241
00:08:12.500 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:12.500 #4 NEW cov: 11588 ft: 11589 corp: 2/16b lim: 40 exec/s: 0 rss: 68Mb L: 15/15 MS: 2 ChangeByte-InsertRepeatedBytes-
00:08:12.500 [2024-10-01 14:27:54.898565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.500 [2024-10-01 14:27:54.898619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:12.500 #5 NEW cov: 11703 ft: 12235 corp: 3/31b lim: 40 exec/s: 0 rss: 68Mb L: 15/15 MS: 1 ChangeByte-
00:08:12.500 [2024-10-01 14:27:54.958730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.500 [2024-10-01 14:27:54.958760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:12.500 #6 NEW cov: 11709 ft: 12519 corp: 4/46b lim: 40 exec/s: 0 rss: 68Mb L: 15/15 MS: 1 CopyPart-
00:08:12.500 [2024-10-01 14:27:55.019339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.500 [2024-10-01 14:27:55.019368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:12.500 [2024-10-01 14:27:55.019460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3bdddddd cdw11:dddddd60 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.500 [2024-10-01 14:27:55.019476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:12.759 #7 NEW cov: 11794 ft: 13458 corp: 5/62b lim: 40 exec/s: 0 rss: 68Mb L: 16/16 MS: 1 InsertByte-
00:08:12.759 [2024-10-01 14:27:55.069805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.759 [2024-10-01 14:27:55.069834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:12.759 [2024-10-01 14:27:55.069932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dd0ddddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.759 [2024-10-01 14:27:55.069948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:12.759 [2024-10-01 14:27:55.070042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dddddddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.759 [2024-10-01 14:27:55.070058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:12.759 #8 NEW cov: 11794 ft: 13775 corp: 6/88b lim: 40 exec/s: 0 rss: 68Mb L: 26/26 MS: 1 CopyPart-
00:08:12.759 [2024-10-01 14:27:55.119546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.759 [2024-10-01 14:27:55.119573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:12.759 [2024-10-01 14:27:55.119677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3bdddddd cdw11:dddddd60 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.759 [2024-10-01 14:27:55.119693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:12.759 #9 NEW cov: 11794 ft: 13869 corp: 7/105b lim: 40 exec/s: 0 rss: 69Mb L: 17/26 MS: 1 CrossOver-
00:08:12.759 [2024-10-01 14:27:55.169459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.759 [2024-10-01 14:27:55.169485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:12.759 #10 NEW cov: 11794 ft: 13955 corp: 8/120b lim: 40 exec/s: 0 rss: 69Mb L: 15/26 MS: 1 ShuffleBytes-
00:08:12.759 [2024-10-01 14:27:55.220055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:fddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.759 [2024-10-01 14:27:55.220081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:12.759 [2024-10-01 14:27:55.220181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3bdddddd cdw11:dddddd60 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.759 [2024-10-01 14:27:55.220197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:12.759 #11 NEW cov: 11794 ft: 13991 corp: 9/136b lim: 40 exec/s: 0 rss: 69Mb L: 16/26 MS: 1 ChangeBit-
00:08:12.759 [2024-10-01 14:27:55.279933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:12.759 [2024-10-01 14:27:55.279960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.017 #12 NEW cov: 11794 ft: 14018 corp: 10/150b lim: 40 exec/s: 0 rss: 69Mb L: 14/26 MS: 1 EraseBytes-
00:08:13.017 [2024-10-01 14:27:55.330028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddde6dd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.017 [2024-10-01 14:27:55.330054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.017 #13 NEW cov: 11794 ft: 14042 corp: 11/165b lim: 40 exec/s: 0 rss: 69Mb L: 15/26 MS: 1 ChangeBinInt-
00:08:13.017 [2024-10-01 14:27:55.380188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.017 [2024-10-01 14:27:55.380215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.017 #22 NEW cov: 11794 ft: 14074 corp: 12/176b lim: 40 exec/s: 0 rss: 69Mb L: 11/26 MS: 4 ShuffleBytes-ChangeByte-ChangeByte-InsertRepeatedBytes-
00:08:13.017 [2024-10-01 14:27:55.430362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.017 [2024-10-01 14:27:55.430390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.017 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:08:13.017 #23 NEW cov: 11817 ft: 14125 corp: 13/190b lim: 40 exec/s: 0 rss: 69Mb L: 14/26 MS: 1 ChangeBinInt-
00:08:13.017 [2024-10-01 14:27:55.490538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dde5e6dd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.017 [2024-10-01 14:27:55.490565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.017 #24 NEW cov: 11817 ft: 14145 corp: 14/205b lim: 40 exec/s: 0 rss: 69Mb L: 15/26 MS: 1 ChangeByte-
00:08:13.017 [2024-10-01 14:27:55.540765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddde6dd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.017 [2024-10-01 14:27:55.540793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.276 #25 NEW cov: 11817 ft: 14166 corp: 15/220b lim: 40 exec/s: 25 rss: 69Mb L: 15/26 MS: 1 ChangeByte-
00:08:13.276 [2024-10-01 14:27:55.590952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddda2 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.276 [2024-10-01 14:27:55.590979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.276 #26 NEW cov: 11817 ft: 14191 corp: 16/235b lim: 40 exec/s: 26 rss: 69Mb L: 15/26 MS: 1 ChangeByte-
00:08:13.276 [2024-10-01 14:27:55.641076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddde6dd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.276 [2024-10-01 14:27:55.641102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.276 #27 NEW cov: 11817 ft: 14225 corp: 17/250b lim: 40 exec/s: 27 rss: 69Mb L: 15/26 MS: 1 ChangeByte-
00:08:13.276 [2024-10-01 14:27:55.692244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.276 [2024-10-01 14:27:55.692272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.276 [2024-10-01 14:27:55.692351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:dd494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.276 [2024-10-01 14:27:55.692367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.276 [2024-10-01 14:27:55.692455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.276 [2024-10-01 14:27:55.692474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:13.276 [2024-10-01 14:27:55.692574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.276 [2024-10-01 14:27:55.692592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:13.276 #28 NEW cov: 11817 ft: 14538 corp: 18/286b lim: 40 exec/s: 28 rss: 69Mb L: 36/36 MS: 1 InsertRepeatedBytes-
00:08:13.276 [2024-10-01 14:27:55.742061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.276 [2024-10-01 14:27:55.742088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.276 [2024-10-01 14:27:55.742191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:0ddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.276 [2024-10-01 14:27:55.742207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.276 [2024-10-01 14:27:55.742295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dddddddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.276 [2024-10-01 14:27:55.742312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:13.276 #29 NEW cov: 11817 ft: 14551 corp: 19/312b lim: 40 exec/s: 29 rss: 69Mb L: 26/36 MS: 1 ShuffleBytes-
00:08:13.535 [2024-10-01 14:27:55.802501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.535 [2024-10-01 14:27:55.802532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.535 [2024-10-01 14:27:55.802625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.535 [2024-10-01 14:27:55.802642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.535 [2024-10-01 14:27:55.802732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.535 [2024-10-01 14:27:55.802748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:13.535 #30 NEW cov: 11817 ft: 14573 corp: 20/343b lim: 40 exec/s: 30 rss: 69Mb L: 31/36 MS: 1 InsertRepeatedBytes-
00:08:13.535 [2024-10-01 14:27:55.852250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.535 [2024-10-01 14:27:55.852278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.535 [2024-10-01 14:27:55.852378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:dddd60dd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.535 [2024-10-01 14:27:55.852394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.535 #31 NEW cov: 11817 ft: 14650 corp: 21/363b lim: 40 exec/s: 31 rss: 69Mb L: 20/36 MS: 1 CrossOver-
00:08:13.535 [2024-10-01 14:27:55.912524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:fddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.535 [2024-10-01 14:27:55.912552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.536 [2024-10-01 14:27:55.912636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3bdddddd cdw11:dd5ddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.536 [2024-10-01 14:27:55.912652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.536 #32 NEW cov: 11817 ft: 14667 corp: 22/380b lim: 40 exec/s: 32 rss: 69Mb L: 17/36 MS: 1 InsertByte-
00:08:13.536 [2024-10-01 14:27:55.972686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddde6dd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.536 [2024-10-01 14:27:55.972714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.536 [2024-10-01 14:27:55.972810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddd2d cdw11:dddddd60 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.536 [2024-10-01 14:27:55.972825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.536 #33 NEW cov: 11817 ft: 14677 corp: 23/396b lim: 40 exec/s: 33 rss: 69Mb L: 16/36 MS: 1 InsertByte-
00:08:13.536 [2024-10-01 14:27:56.023289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:cddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.536 [2024-10-01 14:27:56.023314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.536 [2024-10-01 14:27:56.023411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.536 [2024-10-01 14:27:56.023427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.536 [2024-10-01 14:27:56.023517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.536 [2024-10-01 14:27:56.023532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:13.536 #34 NEW cov: 11817 ft: 14697 corp: 24/427b lim: 40 exec/s: 34 rss: 69Mb L: 31/36 MS: 1 ChangeBit-
00:08:13.795 [2024-10-01 14:27:56.083857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dfdddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.083886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.795 [2024-10-01 14:27:56.083985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:dd494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.084001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.795 [2024-10-01 14:27:56.084087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.084103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:13.795 [2024-10-01 14:27:56.084188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.084204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:13.795 #35 NEW cov: 11817 ft: 14763 corp: 25/463b lim: 40 exec/s: 35 rss: 69Mb L: 36/36 MS: 1 ChangeBit-
00:08:13.795 [2024-10-01 14:27:56.143094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.143120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.795 #36 NEW cov: 11817 ft: 14766 corp: 26/478b lim: 40 exec/s: 36 rss: 69Mb L: 15/36 MS: 1 ChangeBinInt-
00:08:13.795 [2024-10-01 14:27:56.193915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddde6dd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.193946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.795 [2024-10-01 14:27:56.194037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddd28dd cdw11:dddd60ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.194053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.795 [2024-10-01 14:27:56.194150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.194166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:13.795 #37 NEW cov: 11817 ft: 14797 corp: 27/502b lim: 40 exec/s: 37 rss: 69Mb L: 24/36 MS: 1 InsertRepeatedBytes-
00:08:13.795 [2024-10-01 14:27:56.253949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.253977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.795 [2024-10-01 14:27:56.254070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:dddddd60 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.254087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.795 #38 NEW cov: 11817 ft: 14836 corp: 28/518b lim: 40 exec/s: 38 rss: 69Mb L: 16/36 MS: 1 CrossOver-
00:08:13.795 [2024-10-01 14:27:56.304493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.304522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:13.795 [2024-10-01 14:27:56.304601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.304617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:13.795 [2024-10-01 14:27:56.304705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00002100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:13.795 [2024-10-01 14:27:56.304725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:14.053 #39 NEW cov: 11817 ft: 14857 corp: 29/549b lim: 40 exec/s: 39 rss: 69Mb L: 31/36 MS: 1 ChangeByte-
00:08:14.053 [2024-10-01 14:27:56.354272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dde6dddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:14.053 [2024-10-01 14:27:56.354301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:14.053 [2024-10-01 14:27:56.354394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddd2d cdw11:dddddd60 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:14.053 [2024-10-01 14:27:56.354411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:14.053 #40 NEW cov: 11817 ft: 14868 corp: 30/565b lim: 40 exec/s: 40 rss: 70Mb L: 16/36 MS: 1 ShuffleBytes-
00:08:14.053 [2024-10-01 14:27:56.414574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:14.053 [2024-10-01 14:27:56.414601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:14.053 [2024-10-01 14:27:56.414691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:dddd14dd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:14.054 [2024-10-01 14:27:56.414707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:14.054 #41 NEW cov: 11817 ft: 14888 corp: 31/585b lim: 40 exec/s: 41 rss: 70Mb L: 20/36 MS: 1 ChangeBinInt-
00:08:14.054 [2024-10-01 14:27:56.475058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddd0d cdw11:dddddda2 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:14.054 [2024-10-01 14:27:56.475083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:14.054 [2024-10-01 14:27:56.475161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:14.054 [2024-10-01 14:27:56.475175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:14.054 [2024-10-01 14:27:56.475257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:14.054 [2024-10-01 14:27:56.475271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:14.054 #42 NEW cov: 11817 ft: 14902 corp: 32/612b lim: 40 exec/s: 42 rss: 70Mb L: 27/36 MS: 1 InsertRepeatedBytes-
00:08:14.054 [2024-10-01 14:27:56.534614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:dddddddd cdw11:60dddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:14.054 [2024-10-01 14:27:56.534640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:14.054 #43 NEW cov: 11817 ft: 14944 corp: 33/622b lim: 40 exec/s: 21 rss: 70Mb L: 10/36 MS: 1 EraseBytes-
00:08:14.054 #43 DONE cov: 11817 ft: 14944 corp: 33/622b lim: 40 exec/s: 21 rss: 70Mb
00:08:14.054 Done 43 runs in 2 second(s)
00:08:14.313 14:27:56 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_12.conf
14:27:56 -- ../common.sh@72 -- # (( i++ ))
14:27:56 -- ../common.sh@72 -- # (( i < fuzz_num ))
14:27:56 -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1
14:27:56 -- nvmf/run.sh@23 -- # local fuzzer_type=13
14:27:56 -- nvmf/run.sh@24 -- # local timen=1
14:27:56 -- nvmf/run.sh@25 -- # local core=0x1
14:27:56 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
14:27:56 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf
14:27:56 -- nvmf/run.sh@29 -- # printf %02d 13
14:27:56 -- nvmf/run.sh@29 -- # port=4413
14:27:56 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
14:27:56 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413'
14:27:56 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
14:27:56 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 -r /var/tmp/spdk13.sock
[2024-10-01 14:27:56.724270] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
[2024-10-01 14:27:56.724349] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705334 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-10-01 14:27:57.036552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:14.832 [2024-10-01 14:27:57.123973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:14.832 [2024-10-01 14:27:57.124114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:14.832 [2024-10-01 14:27:57.182559] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:14.832 [2024-10-01 14:27:57.198783] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 ***
00:08:14.832 INFO: Running with entropic power schedule (0xFF, 100).
00:08:14.832 INFO: Seed: 1382190950
00:08:14.832 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:08:14.832 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:08:14.832 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:08:14.832 INFO: A corpus is not provided, starting from an empty corpus
00:08:14.832 #2 INITED exec/s: 0 rss: 61Mb
00:08:14.832 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:14.832 This may also happen if the target rejected all inputs we tried so far
00:08:14.832 [2024-10-01 14:27:57.277029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:14.832 [2024-10-01 14:27:57.277077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:14.832 [2024-10-01 14:27:57.277240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:14.832 [2024-10-01 14:27:57.277260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:14.832 [2024-10-01 14:27:57.277391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:14.832 [2024-10-01 14:27:57.277413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:14.832 [2024-10-01 14:27:57.277552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:14.832 [2024-10-01 14:27:57.277572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:15.090 NEW_FUNC[1/670]: 0x44cd38 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257
00:08:15.090 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:15.090 #8 NEW cov: 11578 ft: 11579 corp: 2/37b lim: 40 exec/s: 0 rss: 68Mb L: 36/36 MS: 1 InsertRepeatedBytes-
00:08:15.349 [2024-10-01 14:27:57.617386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.617437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.617539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.617560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.617662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.617682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.349 #9 NEW cov: 11691 ft: 12664 corp: 3/64b lim: 40 exec/s: 0 rss: 68Mb L: 27/36 MS: 1 InsertRepeatedBytes-
00:08:15.349 [2024-10-01 14:27:57.677942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.677970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.678060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.678075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.678170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff01e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.678186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.678268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.678284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:15.349 #12 NEW cov: 11697 ft: 12935 corp: 4/100b lim: 40 exec/s: 0 rss: 68Mb L: 36/36 MS: 3 CMP-ChangeByte-CrossOver- DE: "\377\377\001\000"-
00:08:15.349 [2024-10-01 14:27:57.727978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.728002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.728106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.728122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.728210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededed7 cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.728224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.349 #13 NEW cov: 11782 ft: 13163 corp: 5/127b lim: 40 exec/s: 0 rss: 69Mb L: 27/36 MS: 1 ChangeBinInt-
00:08:15.349 [2024-10-01 14:27:57.788287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.788313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.788410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.788426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.788527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.788542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.349 #14 NEW cov: 11782 ft: 13263 corp: 6/154b lim: 40 exec/s: 0 rss: 69Mb L: 27/36 MS: 1 ShuffleBytes-
00:08:15.349 [2024-10-01 14:27:57.838605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.838629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.838730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededede cdw11:de65dede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.838745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.349 [2024-10-01 14:27:57.838853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededed7 cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.349 [2024-10-01 14:27:57.838868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.349 #15 NEW cov: 11782 ft: 13355 corp: 7/181b lim: 40 exec/s: 0 rss: 69Mb L: 27/36 MS: 1 ChangeByte-
00:08:15.608 [2024-10-01 14:27:57.899016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.608 [2024-10-01 14:27:57.899044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:57.899146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:57.899173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:57.899272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:57.899287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:57.899382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:57.899397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:15.609 #16 NEW cov: 11782 ft: 13404 corp: 8/219b lim: 40 exec/s: 0 rss: 69Mb L: 38/38 MS: 1 CrossOver-
00:08:15.609 [2024-10-01 14:27:57.949526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:57.949551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:57.949651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:57.949667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:57.949738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:57.949752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:57.949846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:57.949861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:15.609 #17 NEW cov: 11782 ft: 13432 corp: 9/255b lim: 40 exec/s: 0 rss: 69Mb L: 36/38 MS: 1 CopyPart-
00:08:15.609 [2024-10-01 14:27:58.009539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:58.009567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:58.009665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dedede30 cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:58.009682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:58.009780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededed7 cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:58.009795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.609 #18 NEW cov: 11782 ft: 13437 corp: 10/282b lim: 40 exec/s: 0 rss: 69Mb L: 27/38 MS: 1 ChangeByte-
00:08:15.609 [2024-10-01 14:27:58.059454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:58.059480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:58.059570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:58.059585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.609 #19 NEW cov: 11782 ft: 13704 corp: 11/305b lim: 40 exec/s: 0 rss: 69Mb L: 23/38 MS: 1 EraseBytes-
00:08:15.609 [2024-10-01 14:27:58.110397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adeda33 cdw11:8d030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:58.110423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:58.110508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000dede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:58.110523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:58.110613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:58.110628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.609 [2024-10-01 14:27:58.110713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.609 [2024-10-01 14:27:58.110732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:15.868 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:08:15.868 #20 NEW cov: 11805 ft: 13740 corp: 12/340b lim: 40 exec/s: 0 rss: 69Mb L: 35/38 MS: 1 CMP- DE: "\3323\215\003\000\000\000\000"-
00:08:15.868 [2024-10-01 14:27:58.160212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dede5bde SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.160238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.868 [2024-10-01 14:27:58.160333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.160347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.868 #21 NEW cov: 11805 ft: 13880 corp: 13/363b lim: 40 exec/s: 0 rss: 69Mb L: 23/38 MS: 1 ChangeByte-
00:08:15.868 [2024-10-01 14:27:58.220918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adeda33 cdw11:8d030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.220943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.868 [2024-10-01 14:27:58.221040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000dede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.221056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.868 [2024-10-01 14:27:58.221147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.221162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.868 #22 NEW cov: 11805 ft: 13998 corp: 14/388b lim: 40 exec/s: 22 rss: 69Mb L: 25/38 MS: 1 EraseBytes-
00:08:15.868 [2024-10-01 14:27:58.281654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.281679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.868 [2024-10-01 14:27:58.281773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.281788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.868 [2024-10-01 14:27:58.281878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.281893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.868 [2024-10-01 14:27:58.281988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.282004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:15.868 #23 NEW cov: 11805 ft: 14002 corp: 15/424b lim: 40 exec/s: 23 rss: 69Mb L: 36/38 MS: 1 ChangeByte-
00:08:15.868 [2024-10-01 14:27:58.331887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.331912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:15.868 [2024-10-01 14:27:58.332009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.332024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:15.868 [2024-10-01 14:27:58.332115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.332129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:15.868 [2024-10-01 14:27:58.332222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff87ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.868 [2024-10-01 14:27:58.332238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:15.868 #24 NEW cov: 11805 ft: 14015 corp: 16/460b lim: 40 exec/s: 24 rss: 69Mb L: 36/38 MS: 1 ChangeByte-
00:08:16.127 [2024-10-01 14:27:58.392439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.392467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.127 [2024-10-01 14:27:58.392562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.392578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.127 [2024-10-01 14:27:58.392665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.392682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.127 [2024-10-01 14:27:58.392778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff87ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.392794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.127 #25 NEW cov: 11805 ft: 14045 corp: 17/496b lim: 40 exec/s: 25 rss: 69Mb L: 36/38 MS: 1 CrossOver- 00:08:16.127 [2024-10-01 14:27:58.451548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.451576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.127 #26 NEW cov: 11805 ft: 14390 corp: 18/509b lim: 40 exec/s: 26 rss: 69Mb L: 13/38 MS: 1 CrossOver- 00:08:16.127 [2024-10-01 14:27:58.502398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adeda33 cdw11:8d030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.502426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.127 [2024-10-01 14:27:58.502513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000dede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.502531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.127 [2024-10-01 14:27:58.502618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.502635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.127 #27 NEW cov: 11805 ft: 14429 corp: 19/534b lim: 40 exec/s: 27 rss: 69Mb L: 25/38 MS: 1 ShuffleBytes- 00:08:16.127 [2024-10-01 14:27:58.562305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0ada338d cdw11:03000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.562332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:16.127 #30 NEW cov: 11805 ft: 14449 corp: 20/543b lim: 40 exec/s: 30 rss: 69Mb L: 9/38 MS: 3 CopyPart-ShuffleBytes-PersAutoDict- DE: "\3323\215\003\000\000\000\000"- 00:08:16.127 [2024-10-01 14:27:58.613618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.613645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.127 [2024-10-01 14:27:58.613753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.613771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.127 [2024-10-01 14:27:58.613864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.613881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.127 [2024-10-01 14:27:58.613975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff87ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.127 [2024-10-01 14:27:58.613991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.127 #31 NEW cov: 11805 ft: 14510 corp: 21/579b lim: 40 exec/s: 31 rss: 69Mb L: 36/38 MS: 1 CrossOver- 00:08:16.386 [2024-10-01 14:27:58.673805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededa33 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.386 [2024-10-01 14:27:58.673845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.386 [2024-10-01 14:27:58.673943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8d030000 cdw11:0000dede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.386 [2024-10-01 14:27:58.673958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.386 [2024-10-01 14:27:58.674053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.386 [2024-10-01 14:27:58.674070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.386 [2024-10-01 14:27:58.674165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.386 [2024-10-01 14:27:58.674182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.386 #32 NEW cov: 11805 ft: 14529 corp: 22/617b lim: 40 exec/s: 32 rss: 69Mb L: 38/38 MS: 1 PersAutoDict- DE: "\3323\215\003\000\000\000\000"- 00:08:16.387 [2024-10-01 14:27:58.733464] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0adede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.733490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.387 #33 NEW cov: 11805 ft: 14561 corp: 23/626b lim: 40 exec/s: 33 rss: 69Mb L: 9/38 MS: 1 CrossOver- 00:08:16.387 [2024-10-01 14:27:58.784212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dede5bde SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.784237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.387 [2024-10-01 14:27:58.784331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.784347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.387 [2024-10-01 14:27:58.784431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededede cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.784449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.387 #34 NEW cov: 11805 ft: 14576 corp: 24/654b lim: 40 exec/s: 34 rss: 70Mb L: 28/38 MS: 1 InsertRepeatedBytes- 00:08:16.387 [2024-10-01 14:27:58.845048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.845075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.387 [2024-10-01 14:27:58.845189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.845207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.387 [2024-10-01 14:27:58.845294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff01e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.845309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.387 [2024-10-01 14:27:58.845398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff28ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.845413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.387 #35 NEW cov: 11805 ft: 14619 corp: 25/690b lim: 40 exec/s: 35 rss: 70Mb L: 36/38 MS: 1 ChangeByte- 00:08:16.387 [2024-10-01 14:27:58.895233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 
14:27:58.895259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.387 [2024-10-01 14:27:58.895328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.895343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.387 [2024-10-01 14:27:58.895435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff60ff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.895450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.387 [2024-10-01 14:27:58.895551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff87ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.387 [2024-10-01 14:27:58.895567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.647 #36 NEW cov: 11805 ft: 14642 corp: 26/726b lim: 40 exec/s: 36 rss: 70Mb L: 36/38 MS: 1 ChangeByte- 00:08:16.647 [2024-10-01 14:27:58.955391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:58.955416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.647 [2024-10-01 14:27:58.955511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:58.955527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.647 [2024-10-01 14:27:58.955625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededeff cdw11:fffff5de SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:58.955645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.647 #37 NEW cov: 11805 ft: 14675 corp: 27/753b lim: 40 exec/s: 37 rss: 70Mb L: 27/38 MS: 1 CMP- DE: "\377\377\377\365"- 00:08:16.647 [2024-10-01 14:27:59.005827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:59.005852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.647 [2024-10-01 14:27:59.005938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededed0 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:59.005954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.647 [2024-10-01 14:27:59.006054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:212117d7 
cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:59.006068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.647 #38 NEW cov: 11805 ft: 14681 corp: 28/780b lim: 40 exec/s: 38 rss: 70Mb L: 27/38 MS: 1 ChangeBinInt- 00:08:16.647 [2024-10-01 14:27:59.066531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:59.066556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.647 [2024-10-01 14:27:59.066649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededed0 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:59.066665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.647 [2024-10-01 14:27:59.066763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:212117d7 cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:59.066778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.647 [2024-10-01 14:27:59.066862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:deffffff cdw11:ffffffde SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:59.066878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.647 #39 NEW cov: 11805 ft: 14719 corp: 29/813b lim: 40 exec/s: 39 rss: 70Mb L: 33/38 MS: 1 InsertRepeatedBytes- 00:08:16.647 [2024-10-01 14:27:59.125966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0ada338d cdw11:03000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.647 [2024-10-01 14:27:59.125991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.647 #40 NEW cov: 11805 ft: 14751 corp: 30/821b lim: 40 exec/s: 40 rss: 70Mb L: 8/38 MS: 1 EraseBytes- 00:08:16.907 [2024-10-01 14:27:59.186131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dedeffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.907 [2024-10-01 14:27:59.186158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.907 #41 NEW cov: 11805 ft: 14768 corp: 31/835b lim: 40 exec/s: 41 rss: 70Mb L: 14/38 MS: 1 EraseBytes- 00:08:16.907 [2024-10-01 14:27:59.247037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0adedede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.907 [2024-10-01 14:27:59.247065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.907 [2024-10-01 14:27:59.247149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dededede cdw11:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.907 [2024-10-01 
14:27:59.247165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.907 [2024-10-01 14:27:59.247258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dededeff cdw11:fffff5de SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.907 [2024-10-01 14:27:59.247273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.907 #42 NEW cov: 11805 ft: 14784 corp: 32/866b lim: 40 exec/s: 21 rss: 70Mb L: 31/38 MS: 1 PersAutoDict- DE: "\377\377\001\000"- 00:08:16.907 #42 DONE cov: 11805 ft: 14784 corp: 32/866b lim: 40 exec/s: 21 rss: 70Mb 00:08:16.907 ###### Recommended dictionary. ###### 00:08:16.907 "\377\377\001\000" # Uses: 1 00:08:16.907 "\3323\215\003\000\000\000\000" # Uses: 2 00:08:16.907 "\377\377\377\365" # Uses: 0 00:08:16.907 ###### End of recommended dictionary. ###### 00:08:16.907 Done 42 runs in 2 second(s) 00:08:16.907 14:27:59 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_13.conf 00:08:16.907 14:27:59 -- ../common.sh@72 -- # (( i++ )) 00:08:16.907 14:27:59 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:16.907 14:27:59 -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:08:16.907 14:27:59 -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:08:16.907 14:27:59 -- nvmf/run.sh@24 -- # local timen=1 00:08:16.907 14:27:59 -- nvmf/run.sh@25 -- # local core=0x1 00:08:16.907 14:27:59 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:16.907 14:27:59 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:08:16.907 14:27:59 -- nvmf/run.sh@29 -- # printf %02d 14 00:08:16.907 14:27:59 -- nvmf/run.sh@29 -- # port=4414 00:08:16.907 14:27:59 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:16.907 14:27:59 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:08:16.907 14:27:59 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:16.907 14:27:59 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 -r /var/tmp/spdk14.sock 00:08:17.167 [2024-10-01 14:27:59.439329] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:17.167 [2024-10-01 14:27:59.439417] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705695 ] 00:08:17.167 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.426 [2024-10-01 14:27:59.752855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.426 [2024-10-01 14:27:59.841318] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.426 [2024-10-01 14:27:59.841464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.426 [2024-10-01 14:27:59.899930] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.426 [2024-10-01 14:27:59.916143] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:17.426 INFO: Running with entropic power schedule (0xFF, 100). 00:08:17.426 INFO: Seed: 4100197117 00:08:17.685 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:17.685 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:17.685 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:17.685 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.685 #2 INITED exec/s: 0 rss: 61Mb 00:08:17.685 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:17.685 This may also happen if the target rejected all inputs we tried so far 00:08:17.685 [2024-10-01 14:27:59.988229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.685 [2024-10-01 14:27:59.988280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.685 [2024-10-01 14:27:59.988426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.685 [2024-10-01 14:27:59.988457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.685 [2024-10-01 14:27:59.988605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.685 [2024-10-01 14:27:59.988638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.685 [2024-10-01 14:27:59.988785] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.685 [2024-10-01 14:27:59.988817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.943 NEW_FUNC[1/671]: 0x44e908 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:17.943 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:17.943 #6 NEW cov: 11572 ft: 11573 corp: 2/31b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 4 ShuffleBytes-ChangeBit-ChangeBit-InsertRepeatedBytes- 
00:08:17.943 [2024-10-01 14:28:00.328805] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.943 [2024-10-01 14:28:00.328865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.943 [2024-10-01 14:28:00.328951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.943 [2024-10-01 14:28:00.328969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.943 [2024-10-01 14:28:00.329061] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.943 [2024-10-01 14:28:00.329076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.943 [2024-10-01 14:28:00.329165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.943 [2024-10-01 14:28:00.329181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.943 #7 NEW cov: 11685 ft: 12022 corp: 3/61b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 ChangeByte- 00:08:17.943 [2024-10-01 14:28:00.388879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.943 [2024-10-01 14:28:00.388907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.943 [2024-10-01 14:28:00.388991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.943 [2024-10-01 14:28:00.389012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.943 [2024-10-01 14:28:00.389108] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.943 [2024-10-01 14:28:00.389125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.943 [2024-10-01 14:28:00.389212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.943 [2024-10-01 14:28:00.389233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.943 #13 NEW cov: 11698 ft: 12330 corp: 4/92b lim: 35 exec/s: 0 rss: 69Mb L: 31/31 MS: 1 InsertByte- 00:08:17.943 [2024-10-01 14:28:00.438687] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.944 [2024-10-01 14:28:00.438716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.944 [2024-10-01 14:28:00.438803] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.944 [2024-10-01 14:28:00.438821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.944 [2024-10-01 14:28:00.438904] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.944 [2024-10-01 14:28:00.438923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.944 NEW_FUNC[1/1]: 0x46fd38 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:17.944 #14 NEW cov: 11793 ft: 12965 corp: 5/115b lim: 35 exec/s: 0 rss: 69Mb L: 23/31 MS: 1 InsertRepeatedBytes- 00:08:18.201 [2024-10-01 14:28:00.489676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.489705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.201 [2024-10-01 14:28:00.489794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.489812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.201 [2024-10-01 14:28:00.489903] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.489918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.201 [2024-10-01 14:28:00.490003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.490020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.201 [2024-10-01 14:28:00.490109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.490126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.201 #15 NEW cov: 11793 ft: 13117 corp: 6/150b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:18.201 [2024-10-01 14:28:00.549559] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.549587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.201 [2024-10-01 14:28:00.549677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.549697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.201 [2024-10-01 14:28:00.549786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET 
FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.549803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.201 [2024-10-01 14:28:00.549887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.549905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.201 #16 NEW cov: 11793 ft: 13199 corp: 7/183b lim: 35 exec/s: 0 rss: 69Mb L: 33/35 MS: 1 EraseBytes- 00:08:18.201 [2024-10-01 14:28:00.609717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.201 [2024-10-01 14:28:00.609748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.202 [2024-10-01 14:28:00.609841] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.609856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.202 [2024-10-01 14:28:00.609946] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.609964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.202 [2024-10-01 14:28:00.610055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.610073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.202 #17 NEW cov: 11793 ft: 13274 corp: 8/217b lim: 35 exec/s: 0 rss: 69Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:08:18.202 [2024-10-01 14:28:00.659542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.659570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.202 [2024-10-01 14:28:00.659660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.659677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.202 [2024-10-01 14:28:00.659766] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.659786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.202 #18 NEW cov: 11793 ft: 13318 corp: 9/240b lim: 35 exec/s: 0 rss: 69Mb L: 23/35 MS: 1 CrossOver- 00:08:18.202 [2024-10-01 14:28:00.720389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 
cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.720422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.202 [2024-10-01 14:28:00.720511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.720530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.202 [2024-10-01 14:28:00.720622] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.720639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.202 [2024-10-01 14:28:00.720732] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.720749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.202 [2024-10-01 14:28:00.720836] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.202 [2024-10-01 14:28:00.720852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.461 #19 NEW cov: 11793 ft: 13409 corp: 10/275b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:08:18.461 [2024-10-01 14:28:00.770273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.770301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.770396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.770414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.770500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.770518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.770616] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.770634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.461 #20 NEW cov: 11793 ft: 13462 corp: 11/309b lim: 35 exec/s: 0 rss: 69Mb L: 34/35 MS: 1 CopyPart- 00:08:18.461 [2024-10-01 14:28:00.830797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.830824] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.830918] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.830936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.831015] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.831034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.831129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.831145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.831231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.831251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.461 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:18.461 #21 NEW cov: 11816 ft: 13501 corp: 12/344b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 InsertByte- 00:08:18.461 [2024-10-01 14:28:00.890624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.890650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.890742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.890770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.890863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.890880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.890976] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000b3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.890991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.461 #28 NEW cov: 11816 ft: 13540 corp: 13/378b lim: 35 exec/s: 0 rss: 69Mb L: 34/35 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:18.461 [2024-10-01 14:28:00.940867] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.940894] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.940985] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.941003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.941098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.941116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.461 [2024-10-01 14:28:00.941209] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.461 [2024-10-01 14:28:00.941228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.461 #29 NEW cov: 11816 ft: 13558 corp: 14/412b lim: 35 exec/s: 29 rss: 69Mb L: 34/35 MS: 1 CrossOver- 00:08:18.721 [2024-10-01 14:28:01.000710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.000740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.000836] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.000853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.000944] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.000964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.721 #30 NEW cov: 11816 ft: 13584 corp: 15/435b lim: 35 exec/s: 30 rss: 69Mb L: 23/35 MS: 1 ShuffleBytes- 00:08:18.721 [2024-10-01 14:28:01.061568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.061596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.061698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.061717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.061811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.061830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:08:18.721 [2024-10-01 14:28:01.061917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.061938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.062033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.062052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.721 #31 NEW cov: 11816 ft: 13663 corp: 16/470b lim: 35 exec/s: 31 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:18.721 [2024-10-01 14:28:01.121126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.121156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.121260] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.121281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.121384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.121403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.721 #32 NEW cov: 11816 ft: 13680 corp: 17/493b lim: 35 exec/s: 32 rss: 69Mb L: 23/35 MS: 1 CrossOver- 00:08:18.721 [2024-10-01 14:28:01.172138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.172165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.172265] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.172282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.172367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.172385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.172480] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.172499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.172586] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:8 cdw10:800000f7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.172605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.721 #33 NEW cov: 11816 ft: 13693 corp: 18/528b lim: 35 exec/s: 33 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:08:18.721 [2024-10-01 14:28:01.231990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.232017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.232115] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.232134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.232218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.232236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.721 [2024-10-01 14:28:01.232323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.721 [2024-10-01 14:28:01.232341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.980 #34 NEW cov: 11816 ft: 13707 corp: 19/560b lim: 35 exec/s: 34 rss: 69Mb L: 32/35 MS: 1 CopyPart- 00:08:18.980 [2024-10-01 14:28:01.282537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.282568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.282665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.282684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.282775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.282793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.282887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.282908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.283002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.283021] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.980 #35 NEW cov: 11816 ft: 13729 corp: 20/595b lim: 35 exec/s: 35 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:08:18.980 [2024-10-01 14:28:01.332030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.332060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.332145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.332163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.332250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.332267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.980 #36 NEW cov: 11816 ft: 13747 corp: 21/618b lim: 35 exec/s: 36 rss: 69Mb L: 23/35 MS: 1 ChangeBit- 00:08:18.980 [2024-10-01 14:28:01.382641] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.382668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.382756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.382775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.382865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.382881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.382975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.382996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.980 #37 NEW cov: 11816 ft: 13769 corp: 22/650b lim: 35 exec/s: 37 rss: 69Mb L: 32/35 MS: 1 CrossOver- 00:08:18.980 [2024-10-01 14:28:01.442724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.442750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.442850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.442866] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.442956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.442975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.980 #38 NEW cov: 11816 ft: 13821 corp: 23/675b lim: 35 exec/s: 38 rss: 70Mb L: 25/35 MS: 1 CMP- DE: "\007\000"- 00:08:18.980 [2024-10-01 14:28:01.503005] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.503044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.503133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.503152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.980 [2024-10-01 14:28:01.503248] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.980 [2024-10-01 14:28:01.503264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.238 #39 NEW cov: 11816 ft: 13858 corp: 24/698b lim: 35 exec/s: 39 rss: 70Mb L: 23/35 MS: 1 ShuffleBytes- 00:08:19.238 [2024-10-01 14:28:01.552816] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.552843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.552948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.552966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.238 #40 NEW cov: 11816 ft: 14058 corp: 25/718b lim: 35 exec/s: 40 rss: 70Mb L: 20/35 MS: 1 EraseBytes- 00:08:19.238 [2024-10-01 14:28:01.613965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.613991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.614088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.614105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.614196] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.614212] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.614302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.614323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.614415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.614432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:19.238 #41 NEW cov: 11816 ft: 14100 corp: 26/753b lim: 35 exec/s: 41 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\007\000"- 00:08:19.238 [2024-10-01 14:28:01.663953] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.663980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.664071] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.664089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.664174] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.664192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.664287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.664307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.238 #42 NEW cov: 11816 ft: 14120 corp: 27/785b lim: 35 exec/s: 42 rss: 70Mb L: 32/35 MS: 1 CrossOver- 00:08:19.238 [2024-10-01 14:28:01.713981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.714008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.714105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.714126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.238 [2024-10-01 14:28:01.714224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.714241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:08:19.238 [2024-10-01 14:28:01.714325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.238 [2024-10-01 14:28:01.714342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.238 #43 NEW cov: 11816 ft: 14130 corp: 28/817b lim: 35 exec/s: 43 rss: 70Mb L: 32/35 MS: 1 ChangeBit- 00:08:19.497 [2024-10-01 14:28:01.764679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.764708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.764799] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.764819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.764907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.764922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.765012] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.765032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.765122] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:000000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.765138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:19.497 #44 NEW cov: 11816 ft: 14134 corp: 29/852b lim: 35 exec/s: 44 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:08:19.497 [2024-10-01 14:28:01.814433] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.814458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.814545] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.814561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.814651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.814667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.814763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.814781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.497 #45 NEW cov: 11816 ft: 14161 corp: 30/886b lim: 35 exec/s: 45 rss: 70Mb L: 34/35 MS: 1 PersAutoDict- DE: "\007\000"- 00:08:19.497 [2024-10-01 14:28:01.874273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.874299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.874393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.874411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.874501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.874517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.497 #46 NEW cov: 11816 ft: 14185 corp: 31/909b lim: 35 exec/s: 46 rss: 70Mb L: 23/35 MS: 1 ChangeBit- 00:08:19.497 [2024-10-01 14:28:01.924368] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.924395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.924490] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.924506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.924594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.924610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.497 #47 NEW cov: 11816 ft: 14201 corp: 32/933b lim: 35 exec/s: 47 rss: 70Mb L: 24/35 MS: 1 InsertByte- 00:08:19.497 [2024-10-01 14:28:01.974870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.974897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.974990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.975010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.975110] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f6 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.975127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.497 [2024-10-01 14:28:01.975223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.497 [2024-10-01 14:28:01.975243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.497 #48 NEW cov: 11816 ft: 14212 corp: 33/963b lim: 35 exec/s: 24 rss: 70Mb L: 30/35 MS: 1 ChangeByte- 00:08:19.497 #48 DONE cov: 11816 ft: 14212 corp: 33/963b lim: 35 exec/s: 24 rss: 70Mb 00:08:19.497 ###### Recommended dictionary. ###### 00:08:19.497 "\007\000" # Uses: 2 00:08:19.497 ###### End of recommended dictionary. ###### 00:08:19.497 Done 48 runs in 2 second(s) 00:08:19.756 14:28:02 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_14.conf 00:08:19.756 14:28:02 -- ../common.sh@72 -- # (( i++ )) 00:08:19.756 14:28:02 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.756 14:28:02 -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:19.756 14:28:02 -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:19.756 14:28:02 -- nvmf/run.sh@24 -- # local timen=1 00:08:19.756 14:28:02 -- nvmf/run.sh@25 -- # local core=0x1 00:08:19.756 14:28:02 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:19.756 14:28:02 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:19.756 14:28:02 -- nvmf/run.sh@29 -- # printf %02d 15 00:08:19.756 14:28:02 -- nvmf/run.sh@29 -- # port=4415 00:08:19.756 14:28:02 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:19.756 14:28:02 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:19.756 14:28:02 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:19.756 14:28:02 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 -r /var/tmp/spdk15.sock 00:08:19.756 [2024-10-01 14:28:02.166782] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
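The run.sh trace above shows how each nvmf fuzzer instance is wired up: the two-digit fuzzer number is folded into a TCP port (printf %02d 15 yields port 4415), the stock fuzz_json.conf has its trsvcid rewritten with sed to match that port, and llvm_nvme_fuzz is launched with a one-second budget against a per-run corpus directory and RPC socket. Below is a minimal shell sketch of that sequence for run 15; the long workspace paths are shortened to $rootdir/$output_dir, and the output redirection and variable names inside run.sh are assumptions, since xtrace does not echo them.

# Sketch of the per-run launch sequence visible in the run.sh trace above.
# $rootdir and $output_dir are assumed stand-ins for the workspace paths.
fuzzer_type=15
timen=1        # -t: fuzz for 1 second
core=0x1       # -m: core mask
port=44$(printf %02d "$fuzzer_type")        # 4415 for fuzzer 15
corpus_dir="$rootdir/../corpus/llvm_nvmf_$(printf %02d "$fuzzer_type")"
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
mkdir -p "$corpus_dir"
# Point the JSON config's NVMe/TCP listener at this run's port
# (the redirect target is assumed; xtrace does not show it).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
"$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$output_dir/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
    -D "$corpus_dir" -Z "$fuzzer_type" -r "/var/tmp/spdk${fuzzer_type}.sock"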
00:08:19.756 [2024-10-01 14:28:02.166850] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706065 ] 00:08:19.756 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.014 [2024-10-01 14:28:02.487665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.272 [2024-10-01 14:28:02.580939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:20.272 [2024-10-01 14:28:02.581085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.272 [2024-10-01 14:28:02.639559] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.272 [2024-10-01 14:28:02.655753] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:20.272 INFO: Running with entropic power schedule (0xFF, 100). 00:08:20.272 INFO: Seed: 2545230545 00:08:20.272 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:20.272 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:20.272 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:20.272 INFO: A corpus is not provided, starting from an empty corpus 00:08:20.272 #2 INITED exec/s: 0 rss: 61Mb 00:08:20.272 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:20.272 This may also happen if the target rejected all inputs we tried so far 00:08:20.273 [2024-10-01 14:28:02.732267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.273 [2024-10-01 14:28:02.732312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.530 NEW_FUNC[1/670]: 0x44fe48 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:20.530 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:20.530 #12 NEW cov: 11560 ft: 11561 corp: 2/9b lim: 35 exec/s: 0 rss: 68Mb L: 8/8 MS: 5 ShuffleBytes-ChangeBit-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:08:20.530 [2024-10-01 14:28:03.053519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.530 [2024-10-01 14:28:03.053556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.788 #13 NEW cov: 11673 ft: 12179 corp: 3/18b lim: 35 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 InsertByte- 00:08:20.788 [2024-10-01 14:28:03.114532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.788 [2024-10-01 14:28:03.114560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.788 [2024-10-01 14:28:03.114658] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.788 [2024-10-01 14:28:03.114674] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.788 [2024-10-01 14:28:03.114772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.788 [2024-10-01 14:28:03.114788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.788 [2024-10-01 14:28:03.114876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.788 [2024-10-01 14:28:03.114892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:20.788 #22 NEW cov: 11679 ft: 13052 corp: 4/52b lim: 35 exec/s: 0 rss: 68Mb L: 34/34 MS: 4 InsertByte-ChangeBit-CopyPart-InsertRepeatedBytes- 00:08:20.788 [2024-10-01 14:28:03.163800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.788 [2024-10-01 14:28:03.163826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.788 #23 NEW cov: 11764 ft: 13288 corp: 5/61b lim: 35 exec/s: 0 rss: 68Mb L: 9/34 MS: 1 ChangeBit- 00:08:20.788 [2024-10-01 14:28:03.224967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.788 [2024-10-01 14:28:03.224993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.788 [2024-10-01 14:28:03.225099] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.788 [2024-10-01 14:28:03.225115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.788 [2024-10-01 14:28:03.225211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.788 [2024-10-01 14:28:03.225226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.789 [2024-10-01 14:28:03.225323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.789 [2024-10-01 14:28:03.225339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:20.789 #24 NEW cov: 11764 ft: 13423 corp: 6/95b lim: 35 exec/s: 0 rss: 68Mb L: 34/34 MS: 1 ChangeByte- 00:08:20.789 [2024-10-01 14:28:03.284915] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.789 [2024-10-01 14:28:03.284943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.789 [2024-10-01 14:28:03.285043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.789 [2024-10-01 14:28:03.285059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.789 #25 NEW cov: 11764 ft: 13675 corp: 7/110b lim: 35 exec/s: 0 rss: 68Mb L: 15/34 MS: 1 InsertRepeatedBytes- 00:08:21.047 [2024-10-01 14:28:03.334988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.047 [2024-10-01 14:28:03.335014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.047 #26 NEW cov: 11764 ft: 13761 corp: 8/118b lim: 35 exec/s: 0 rss: 68Mb L: 8/34 MS: 1 CrossOver- 00:08:21.047 [2024-10-01 14:28:03.386133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.047 [2024-10-01 14:28:03.386158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.047 [2024-10-01 14:28:03.386263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.047 [2024-10-01 14:28:03.386280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.047 [2024-10-01 14:28:03.386375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.047 [2024-10-01 14:28:03.386392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.047 [2024-10-01 14:28:03.386478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.047 [2024-10-01 14:28:03.386492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.047 #27 NEW cov: 11764 ft: 13817 corp: 9/152b lim: 35 exec/s: 0 rss: 69Mb L: 34/34 MS: 1 ChangeByte- 00:08:21.047 [2024-10-01 14:28:03.446941] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.047 [2024-10-01 14:28:03.446967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.047 [2024-10-01 14:28:03.447058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.047 [2024-10-01 14:28:03.447074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.047 [2024-10-01 14:28:03.447159] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.047 [2024-10-01 14:28:03.447176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.047 [2024-10-01 14:28:03.447272] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.048 [2024-10-01 14:28:03.447288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.048 NEW_FUNC[1/1]: 0x46fd38 in 
feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:21.048 #28 NEW cov: 11778 ft: 13906 corp: 10/187b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CrossOver- 00:08:21.048 [2024-10-01 14:28:03.495889] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.048 [2024-10-01 14:28:03.495917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.048 #29 NEW cov: 11778 ft: 13938 corp: 11/196b lim: 35 exec/s: 0 rss: 69Mb L: 9/35 MS: 1 InsertByte- 00:08:21.048 [2024-10-01 14:28:03.546340] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.048 [2024-10-01 14:28:03.546366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.048 #30 NEW cov: 11778 ft: 13993 corp: 12/205b lim: 35 exec/s: 0 rss: 69Mb L: 9/35 MS: 1 CopyPart- 00:08:21.306 [2024-10-01 14:28:03.597953] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.597981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.598074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.598088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.598196] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.598210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.598304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.598319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.306 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:21.306 #31 NEW cov: 11795 ft: 14034 corp: 13/240b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:21.306 [2024-10-01 14:28:03.658125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.658152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.658246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.658263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.658362] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.658375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.306 #36 NEW cov: 11795 ft: 14183 corp: 14/269b lim: 35 exec/s: 0 rss: 69Mb L: 29/35 MS: 5 CrossOver-CopyPart-CrossOver-ChangeByte-InsertRepeatedBytes- 00:08:21.306 [2024-10-01 14:28:03.708162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.708187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.708281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.708298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.708394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.708413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.708505] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.708520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.306 #37 NEW cov: 11795 ft: 14197 corp: 15/303b lim: 35 exec/s: 37 rss: 69Mb L: 34/35 MS: 1 CopyPart- 00:08:21.306 [2024-10-01 14:28:03.768620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.768649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.768747] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.768764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.768860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.768876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.306 [2024-10-01 14:28:03.768948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.768965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.306 #38 NEW cov: 11795 ft: 14230 corp: 16/337b lim: 35 exec/s: 38 rss: 69Mb L: 34/35 MS: 1 ChangeBit- 00:08:21.306 [2024-10-01 14:28:03.818070] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.306 [2024-10-01 14:28:03.818096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.565 #40 NEW cov: 11795 ft: 14280 corp: 17/347b lim: 35 exec/s: 40 rss: 69Mb L: 10/35 MS: 2 EraseBytes-CrossOver- 00:08:21.565 [2024-10-01 14:28:03.869090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.869118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:03.869211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.869226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:03.869323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.869340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:03.869432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.869448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.565 #41 NEW cov: 11795 ft: 14304 corp: 18/381b lim: 35 exec/s: 41 rss: 69Mb L: 34/35 MS: 1 ChangeByte- 00:08:21.565 [2024-10-01 14:28:03.929759] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.929788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:03.929905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.929923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:03.930018] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.930035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:03.930132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.930149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:03.930235] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.930252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.565 #42 NEW cov: 11795 ft: 14335 corp: 19/416b lim: 35 exec/s: 42 rss: 69Mb L: 35/35 MS: 1 InsertByte- 00:08:21.565 [2024-10-01 14:28:03.979021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:03.979052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.565 #43 NEW cov: 11795 ft: 14385 corp: 20/428b lim: 35 exec/s: 43 rss: 69Mb L: 12/35 MS: 1 CrossOver- 00:08:21.565 [2024-10-01 14:28:04.029347] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:04.029373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.565 #44 NEW cov: 11795 ft: 14415 corp: 21/436b lim: 35 exec/s: 44 rss: 69Mb L: 8/35 MS: 1 ChangeByte- 00:08:21.565 [2024-10-01 14:28:04.080937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:04.080961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:04.081070] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:04.081086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:04.081185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:04.081200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.565 [2024-10-01 14:28:04.081291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.565 [2024-10-01 14:28:04.081306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.827 #45 NEW cov: 11795 ft: 14434 corp: 22/471b lim: 35 exec/s: 45 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:21.827 [2024-10-01 14:28:04.140724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000022d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.140750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.140865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000550 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.140893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.140981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.140998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.141089] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.141108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.827 #46 NEW cov: 11795 ft: 14449 corp: 23/505b lim: 35 exec/s: 46 rss: 69Mb L: 34/35 MS: 1 ChangeBinInt- 00:08:21.827 [2024-10-01 14:28:04.201034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.201058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.201154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.201169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.201263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.201278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.827 #47 NEW cov: 11795 ft: 14494 corp: 24/535b lim: 35 exec/s: 47 rss: 69Mb L: 30/35 MS: 1 EraseBytes- 00:08:21.827 [2024-10-01 14:28:04.261337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.261361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.261446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.261461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.261524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.261540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.261627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.261643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.261754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.261769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.827 #48 NEW cov: 11795 ft: 14506 corp: 25/570b lim: 35 exec/s: 48 rss: 69Mb L: 35/35 MS: 1 ChangeBit- 00:08:21.827 [2024-10-01 
14:28:04.321369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.321393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.321496] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.321516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.321608] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000560 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.321623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.827 [2024-10-01 14:28:04.321724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.827 [2024-10-01 14:28:04.321740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.827 #49 NEW cov: 11795 ft: 14521 corp: 26/604b lim: 35 exec/s: 49 rss: 69Mb L: 34/35 MS: 1 ChangeBinInt- 00:08:22.118 [2024-10-01 14:28:04.371336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.371363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.371465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.371480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.118 #50 NEW cov: 11795 ft: 14552 corp: 27/620b lim: 35 exec/s: 50 rss: 69Mb L: 16/35 MS: 1 CrossOver- 00:08:22.118 [2024-10-01 14:28:04.432310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.432334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.432425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.432441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.432532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.432548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.118 #51 NEW cov: 11795 ft: 14571 corp: 28/651b lim: 35 exec/s: 51 rss: 69Mb L: 31/35 MS: 1 InsertByte- 00:08:22.118 [2024-10-01 14:28:04.492882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.492907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.492990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.493006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.493095] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.493111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.493203] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.493217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:22.118 #52 NEW cov: 11795 ft: 14598 corp: 29/686b lim: 35 exec/s: 52 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:22.118 [2024-10-01 14:28:04.543270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.543295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.543393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.543409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.543495] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.543509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.543611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.543625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:22.118 #53 NEW cov: 11795 ft: 14644 corp: 30/721b lim: 35 exec/s: 53 rss: 70Mb L: 35/35 MS: 1 ChangeBit- 00:08:22.118 [2024-10-01 14:28:04.602580] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.602605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.118 [2024-10-01 14:28:04.602715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.118 [2024-10-01 14:28:04.602734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.118 #59 NEW cov: 11802 ft: 14670 corp: 31/737b lim: 35 exec/s: 59 rss: 70Mb L: 16/35 MS: 1 ShuffleBytes- 00:08:22.392 [2024-10-01 14:28:04.662618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.392 [2024-10-01 14:28:04.662644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.392 #60 NEW cov: 11802 ft: 14723 corp: 32/745b lim: 35 exec/s: 60 rss: 70Mb L: 8/35 MS: 1 ChangeByte- 00:08:22.392 [2024-10-01 14:28:04.724224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.392 [2024-10-01 14:28:04.724248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.392 [2024-10-01 14:28:04.724338] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.392 [2024-10-01 14:28:04.724353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.392 [2024-10-01 14:28:04.724443] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.392 [2024-10-01 14:28:04.724459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.392 [2024-10-01 14:28:04.724552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000005aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.392 [2024-10-01 14:28:04.724567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:22.392 #61 NEW cov: 11802 ft: 14737 corp: 33/780b lim: 35 exec/s: 30 rss: 70Mb L: 35/35 MS: 1 ChangeBit- 00:08:22.392 #61 DONE cov: 11802 ft: 14737 corp: 33/780b lim: 35 exec/s: 30 rss: 70Mb 00:08:22.392 Done 61 runs in 2 second(s) 00:08:22.392 14:28:04 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_15.conf 00:08:22.392 14:28:04 -- ../common.sh@72 -- # (( i++ )) 00:08:22.392 14:28:04 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.392 14:28:04 -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:08:22.392 14:28:04 -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:08:22.392 14:28:04 -- nvmf/run.sh@24 -- # local timen=1 00:08:22.392 14:28:04 -- nvmf/run.sh@25 -- # local core=0x1 00:08:22.392 14:28:04 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:22.392 14:28:04 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:08:22.392 14:28:04 -- nvmf/run.sh@29 -- # printf %02d 16 00:08:22.392 14:28:04 -- nvmf/run.sh@29 -- # port=4416 00:08:22.392 14:28:04 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:22.392 14:28:04 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:08:22.392 14:28:04 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:22.392 14:28:04 -- nvmf/run.sh@36 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 -r /var/tmp/spdk16.sock 00:08:22.713 [2024-10-01 14:28:04.921811] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:22.713 [2024-10-01 14:28:04.921881] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706435 ] 00:08:22.713 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.971 [2024-10-01 14:28:05.231872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.971 [2024-10-01 14:28:05.309670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.971 [2024-10-01 14:28:05.309806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.971 [2024-10-01 14:28:05.368837] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.971 [2024-10-01 14:28:05.385036] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:08:22.971 INFO: Running with entropic power schedule (0xFF, 100). 00:08:22.971 INFO: Seed: 979265851 00:08:22.971 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:22.971 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:22.971 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:22.971 INFO: A corpus is not provided, starting from an empty corpus 00:08:22.971 #2 INITED exec/s: 0 rss: 61Mb 00:08:22.971 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
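The `#N NEW cov:` records that follow are standard libFuzzer status lines: `cov` counts covered code edges, `ft` counts features, `corp` gives the corpus size as entries/bytes, `exec/s` the execution rate, and `rss` resident memory; the trailing `L:` and `MS:` fields give the input length and the mutation sequence that produced it. As a rough sketch, the coverage trajectory can be pulled out of a captured log with awk — the field offsets assume the timestamp-prefixed layout seen here, and `build.log` is a stand-in filename, not something this pipeline produces:

    # Print the event id and edge count for every NEW event, assuming
    # records shaped like "00:08:22.118 #52 NEW cov: 11795 ft: ...".
    awk '/#[0-9]+ NEW cov:/ {
        for (i = 3; i <= NF; i++)
            if ($i == "cov:") { print $(i - 2), $(i + 1); break }
    }' build.log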
00:08:22.971 This may also happen if the target rejected all inputs we tried so far 00:08:22.971 [2024-10-01 14:28:05.440244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744072048607231 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.971 [2024-10-01 14:28:05.440275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.229 NEW_FUNC[1/671]: 0x451308 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:08:23.229 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:23.229 #5 NEW cov: 11663 ft: 11664 corp: 2/40b lim: 105 exec/s: 0 rss: 68Mb L: 39/39 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:08:23.488 [2024-10-01 14:28:05.761654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6438275381263423833 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.761717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.761815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.761856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.761937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.761966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.762045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.762074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:23.488 #19 NEW cov: 11776 ft: 12943 corp: 3/141b lim: 105 exec/s: 0 rss: 68Mb L: 101/101 MS: 4 CrossOver-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:08:23.488 [2024-10-01 14:28:05.811255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.811285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.811335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.811350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.488 #20 NEW cov: 11782 ft: 13490 corp: 4/192b lim: 105 exec/s: 0 rss: 68Mb L: 51/101 MS: 1 InsertRepeatedBytes- 00:08:23.488 [2024-10-01 14:28:05.851602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:0 lba:6438275381263423833 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.851631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.851674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.851690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.851747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.851763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.851815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.851830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:23.488 #21 NEW cov: 11867 ft: 13751 corp: 5/293b lim: 105 exec/s: 0 rss: 68Mb L: 101/101 MS: 1 ShuffleBytes- 00:08:23.488 [2024-10-01 14:28:05.901530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.901558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.901611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.901630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.488 #22 NEW cov: 11867 ft: 13906 corp: 6/344b lim: 105 exec/s: 0 rss: 68Mb L: 51/101 MS: 1 ShuffleBytes- 00:08:23.488 [2024-10-01 14:28:05.951877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6438177142476462425 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.951907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.951949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.951964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.952018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.952034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.952088] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.952104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:23.488 #28 NEW cov: 11867 ft: 13972 corp: 7/448b lim: 105 exec/s: 0 rss: 68Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:08:23.488 [2024-10-01 14:28:05.992017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6438275381263423833 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.992047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.992080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.992095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.488 [2024-10-01 14:28:05.992150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.488 [2024-10-01 14:28:05.992166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.489 [2024-10-01 14:28:05.992218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.489 [2024-10-01 14:28:05.992233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:23.747 #29 NEW cov: 11867 ft: 14099 corp: 8/549b lim: 105 exec/s: 0 rss: 68Mb L: 101/104 MS: 1 ChangeByte- 00:08:23.747 [2024-10-01 14:28:06.041928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.747 [2024-10-01 14:28:06.041957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.747 [2024-10-01 14:28:06.041994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.747 [2024-10-01 14:28:06.042009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.747 #30 NEW cov: 11867 ft: 14213 corp: 9/601b lim: 105 exec/s: 0 rss: 68Mb L: 52/104 MS: 1 InsertByte- 00:08:23.747 [2024-10-01 14:28:06.081983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.747 [2024-10-01 14:28:06.082011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.747 [2024-10-01 14:28:06.082049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.747 [2024-10-01 14:28:06.082064] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.747 #31 NEW cov: 11867 ft: 14382 corp: 10/652b lim: 105 exec/s: 0 rss: 68Mb L: 51/104 MS: 1 ChangeBit- 00:08:23.747 [2024-10-01 14:28:06.122134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497571 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.747 [2024-10-01 14:28:06.122162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.747 [2024-10-01 14:28:06.122214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.747 [2024-10-01 14:28:06.122230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.747 #32 NEW cov: 11867 ft: 14402 corp: 11/703b lim: 105 exec/s: 0 rss: 68Mb L: 51/104 MS: 1 ChangeBit- 00:08:23.747 [2024-10-01 14:28:06.162237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.747 [2024-10-01 14:28:06.162266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.748 [2024-10-01 14:28:06.162311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.748 [2024-10-01 14:28:06.162327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.748 #33 NEW cov: 11867 ft: 14406 corp: 12/754b lim: 105 exec/s: 0 rss: 69Mb L: 51/104 MS: 1 ChangeBinInt- 00:08:23.748 [2024-10-01 14:28:06.202553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6438275381263423833 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.748 [2024-10-01 14:28:06.202581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.748 [2024-10-01 14:28:06.202619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.748 [2024-10-01 14:28:06.202635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.748 [2024-10-01 14:28:06.202689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.748 [2024-10-01 14:28:06.202704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.748 [2024-10-01 14:28:06.202762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.748 [2024-10-01 14:28:06.202777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:23.748 #34 NEW cov: 11867 ft: 14438 corp: 13/856b lim: 105 exec/s: 0 rss: 69Mb L: 102/104 
MS: 1 InsertByte- 00:08:23.748 [2024-10-01 14:28:06.242689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.748 [2024-10-01 14:28:06.242724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.748 [2024-10-01 14:28:06.242765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744072159994787 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.748 [2024-10-01 14:28:06.242779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.748 [2024-10-01 14:28:06.242831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.748 [2024-10-01 14:28:06.242847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.748 [2024-10-01 14:28:06.242899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11791448174156030883 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.748 [2024-10-01 14:28:06.242913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.006 #40 NEW cov: 11867 ft: 14478 corp: 14/947b lim: 105 exec/s: 0 rss: 69Mb L: 91/104 MS: 1 InsertRepeatedBytes- 00:08:24.006 [2024-10-01 14:28:06.292851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6438275381263423833 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.292879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.006 [2024-10-01 14:28:06.292917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.292930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.006 [2024-10-01 14:28:06.292981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.292997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.006 [2024-10-01 14:28:06.293049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588817241 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.293067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.006 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:24.006 #41 NEW cov: 11890 ft: 14538 corp: 15/1049b lim: 105 exec/s: 0 rss: 69Mb L: 102/104 MS: 1 InsertByte- 00:08:24.006 [2024-10-01 14:28:06.332955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.332985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.006 [2024-10-01 14:28:06.333019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.333037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.006 [2024-10-01 14:28:06.333082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.333098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.006 [2024-10-01 14:28:06.333150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.333170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.006 #42 NEW cov: 11890 ft: 14591 corp: 16/1149b lim: 105 exec/s: 0 rss: 69Mb L: 100/104 MS: 1 InsertRepeatedBytes- 00:08:24.006 [2024-10-01 14:28:06.372838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.372869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.006 [2024-10-01 14:28:06.372926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.372942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.006 #43 NEW cov: 11890 ft: 14630 corp: 17/1200b lim: 105 exec/s: 0 rss: 69Mb L: 51/104 MS: 1 CrossOver- 00:08:24.006 [2024-10-01 14:28:06.412934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.412963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.006 [2024-10-01 14:28:06.413017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.413033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.006 #44 NEW cov: 11890 ft: 14726 corp: 18/1251b lim: 105 exec/s: 44 rss: 69Mb L: 51/104 MS: 1 ChangeBinInt- 00:08:24.006 [2024-10-01 14:28:06.453046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744072048607231 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.453075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:24.006 [2024-10-01 14:28:06.453112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.453128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.006 #45 NEW cov: 11890 ft: 14765 corp: 19/1311b lim: 105 exec/s: 45 rss: 69Mb L: 60/104 MS: 1 InsertRepeatedBytes- 00:08:24.006 [2024-10-01 14:28:06.493181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41818 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.493209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.006 [2024-10-01 14:28:06.493251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.006 [2024-10-01 14:28:06.493266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.006 #46 NEW cov: 11890 ft: 14804 corp: 20/1363b lim: 105 exec/s: 46 rss: 69Mb L: 52/104 MS: 1 CrossOver- 00:08:24.263 [2024-10-01 14:28:06.533164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.533194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.263 #47 NEW cov: 11890 ft: 14850 corp: 21/1400b lim: 105 exec/s: 47 rss: 69Mb L: 37/104 MS: 1 EraseBytes- 00:08:24.263 [2024-10-01 14:28:06.573643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.573675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.263 [2024-10-01 14:28:06.573712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744072159994787 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.573734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.263 [2024-10-01 14:28:06.573787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.573803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.263 [2024-10-01 14:28:06.573856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11791448174156030883 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.573873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.263 #48 NEW cov: 11890 ft: 14879 corp: 22/1491b lim: 105 exec/s: 48 rss: 70Mb L: 91/104 MS: 1 CopyPart- 00:08:24.263 [2024-10-01 14:28:06.623575] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.623603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.263 [2024-10-01 14:28:06.623658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497571 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.623675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.263 #49 NEW cov: 11890 ft: 14887 corp: 23/1542b lim: 105 exec/s: 49 rss: 70Mb L: 51/104 MS: 1 ChangeBit- 00:08:24.263 [2024-10-01 14:28:06.663615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.663644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.263 [2024-10-01 14:28:06.663685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.663702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.263 #50 NEW cov: 11890 ft: 14892 corp: 24/1588b lim: 105 exec/s: 50 rss: 70Mb L: 46/104 MS: 1 EraseBytes- 00:08:24.263 [2024-10-01 14:28:06.703660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744072048607231 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.703689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.263 #51 NEW cov: 11890 ft: 14912 corp: 25/1625b lim: 105 exec/s: 51 rss: 70Mb L: 37/104 MS: 1 EraseBytes- 00:08:24.263 [2024-10-01 14:28:06.743891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791268252131500963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.743919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.263 [2024-10-01 14:28:06.743971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.263 [2024-10-01 14:28:06.743988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.263 #52 NEW cov: 11890 ft: 14921 corp: 26/1671b lim: 105 exec/s: 52 rss: 70Mb L: 46/104 MS: 1 ChangeBinInt- 00:08:24.519 [2024-10-01 14:28:06.794027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:118117938451513344 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.794056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.519 [2024-10-01 14:28:06.794108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.794124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.519 #53 NEW cov: 11890 ft: 14933 corp: 27/1726b lim: 105 exec/s: 53 rss: 70Mb L: 55/104 MS: 1 CMP- DE: "\001\000\000\001"- 00:08:24.519 [2024-10-01 14:28:06.834347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6438275381263423833 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.834375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.519 [2024-10-01 14:28:06.834422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.834438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.519 [2024-10-01 14:28:06.834490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.834506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.519 [2024-10-01 14:28:06.834558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.834574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.519 #54 NEW cov: 11890 ft: 14970 corp: 28/1827b lim: 105 exec/s: 54 rss: 70Mb L: 101/104 MS: 1 ChangeBinInt- 00:08:24.519 [2024-10-01 14:28:06.874521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6438275381263423833 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.874549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.519 [2024-10-01 14:28:06.874596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.874611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.519 [2024-10-01 14:28:06.874663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.874679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.519 [2024-10-01 14:28:06.874736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.519 [2024-10-01 14:28:06.874751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.520 #55 NEW cov: 11890 ft: 14989 corp: 29/1928b lim: 105 exec/s: 55 rss: 70Mb L: 101/104 MS: 1 CopyPart- 
00:08:24.520 [2024-10-01 14:28:06.914468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41818 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.520 [2024-10-01 14:28:06.914498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.520 [2024-10-01 14:28:06.914534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.520 [2024-10-01 14:28:06.914550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.520 [2024-10-01 14:28:06.914604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275700416403801 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.520 [2024-10-01 14:28:06.914619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.520 #56 NEW cov: 11890 ft: 15299 corp: 30/2000b lim: 105 exec/s: 56 rss: 70Mb L: 72/104 MS: 1 CopyPart- 00:08:24.520 [2024-10-01 14:28:06.964402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744072048607231 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.520 [2024-10-01 14:28:06.964430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.520 #57 NEW cov: 11890 ft: 15316 corp: 31/2039b lim: 105 exec/s: 57 rss: 70Mb L: 39/104 MS: 1 ChangeBit- 00:08:24.520 [2024-10-01 14:28:07.004887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.520 [2024-10-01 14:28:07.004916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.520 [2024-10-01 14:28:07.004955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1446803456441450497 len:5141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.520 [2024-10-01 14:28:07.004970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.520 [2024-10-01 14:28:07.005022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.520 [2024-10-01 14:28:07.005038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.520 [2024-10-01 14:28:07.005095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11791448170197947555 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.520 [2024-10-01 14:28:07.005110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.520 #58 NEW cov: 11890 ft: 15364 corp: 32/2143b lim: 105 exec/s: 58 rss: 70Mb L: 104/104 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:08:24.778 [2024-10-01 14:28:07.055212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6438275381263423833 len:22874 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.055241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.055287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.055305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.055359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.055374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.055428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.055446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.055498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:97207107245375488 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.055514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:24.778 #59 NEW cov: 11890 ft: 15406 corp: 33/2248b lim: 105 exec/s: 59 rss: 70Mb L: 105/105 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:08:24.778 [2024-10-01 14:28:07.105178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791269351643128739 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.105206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.105246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.105259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.105310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.105324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.105378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11791448170197947555 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.105393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.778 #60 NEW cov: 11890 ft: 15420 corp: 34/2352b lim: 105 exec/s: 60 rss: 70Mb L: 104/105 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:08:24.778 [2024-10-01 14:28:07.145159] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.145187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.145223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.145239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.145292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11791448172598109091 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.145309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.778 #61 NEW cov: 11890 ft: 15425 corp: 35/2430b lim: 105 exec/s: 61 rss: 70Mb L: 78/105 MS: 1 CrossOver- 00:08:24.778 [2024-10-01 14:28:07.185407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.185436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.185475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744072159994787 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.185491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.185550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744069431361791 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.185566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.185618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11791448174156030883 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.185632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.778 #62 NEW cov: 11890 ft: 15434 corp: 36/2521b lim: 105 exec/s: 62 rss: 70Mb L: 91/105 MS: 1 CMP- DE: "y\001\000\000"- 00:08:24.778 [2024-10-01 14:28:07.235564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.235592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 14:28:07.235640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744072159994787 len:65326 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.778 [2024-10-01 14:28:07.235655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.778 [2024-10-01 
14:28:07.235710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.779 [2024-10-01 14:28:07.235730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.779 [2024-10-01 14:28:07.235785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.779 [2024-10-01 14:28:07.235801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.779 #63 NEW cov: 11890 ft: 15461 corp: 37/2620b lim: 105 exec/s: 63 rss: 70Mb L: 99/105 MS: 1 InsertRepeatedBytes- 00:08:24.779 [2024-10-01 14:28:07.275556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.779 [2024-10-01 14:28:07.275584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.779 [2024-10-01 14:28:07.275621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.779 [2024-10-01 14:28:07.275637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.779 [2024-10-01 14:28:07.275690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11791448172598109091 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.779 [2024-10-01 14:28:07.275706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.037 #64 NEW cov: 11890 ft: 15512 corp: 38/2698b lim: 105 exec/s: 64 rss: 70Mb L: 78/105 MS: 1 ChangeBinInt- 00:08:25.037 [2024-10-01 14:28:07.325577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 14:28:07.325605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.037 [2024-10-01 14:28:07.325643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 14:28:07.325662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.037 #65 NEW cov: 11890 ft: 15543 corp: 39/2741b lim: 105 exec/s: 65 rss: 70Mb L: 43/105 MS: 1 EraseBytes- 00:08:25.037 [2024-10-01 14:28:07.365930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6438177142476462425 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 14:28:07.365959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.037 [2024-10-01 14:28:07.366007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 
14:28:07.366023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.037 [2024-10-01 14:28:07.366076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 14:28:07.366091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.037 [2024-10-01 14:28:07.366144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 14:28:07.366159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.037 #66 NEW cov: 11890 ft: 15558 corp: 40/2845b lim: 105 exec/s: 66 rss: 70Mb L: 104/105 MS: 1 ChangeBinInt- 00:08:25.037 [2024-10-01 14:28:07.416167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:11791269351643128739 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 14:28:07.416195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.037 [2024-10-01 14:28:07.416234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 14:28:07.416250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.037 [2024-10-01 14:28:07.416303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 14:28:07.416320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.037 [2024-10-01 14:28:07.416372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11791448170197947555 len:44196 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.037 [2024-10-01 14:28:07.416387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.037 #72 NEW cov: 11890 ft: 15562 corp: 41/2949b lim: 105 exec/s: 36 rss: 70Mb L: 104/105 MS: 1 ChangeBinInt- 00:08:25.037 #72 DONE cov: 11890 ft: 15562 corp: 41/2949b lim: 105 exec/s: 36 rss: 70Mb 00:08:25.037 ###### Recommended dictionary. ###### 00:08:25.037 "\001\000\000\001" # Uses: 4 00:08:25.037 "y\001\000\000" # Uses: 0 00:08:25.037 ###### End of recommended dictionary. 
###### 00:08:25.037 Done 72 runs in 2 second(s) 00:08:25.295 14:28:07 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_16.conf 00:08:25.295 14:28:07 -- ../common.sh@72 -- # (( i++ )) 00:08:25.295 14:28:07 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.295 14:28:07 -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:25.295 14:28:07 -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:25.295 14:28:07 -- nvmf/run.sh@24 -- # local timen=1 00:08:25.295 14:28:07 -- nvmf/run.sh@25 -- # local core=0x1 00:08:25.295 14:28:07 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:25.295 14:28:07 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:25.295 14:28:07 -- nvmf/run.sh@29 -- # printf %02d 17 00:08:25.295 14:28:07 -- nvmf/run.sh@29 -- # port=4417 00:08:25.295 14:28:07 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:25.295 14:28:07 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:25.295 14:28:07 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:25.295 14:28:07 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 -r /var/tmp/spdk17.sock 00:08:25.295 [2024-10-01 14:28:07.632610] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:25.295 [2024-10-01 14:28:07.632684] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706801 ] 00:08:25.295 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.552 [2024-10-01 14:28:07.968838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.552 [2024-10-01 14:28:08.052524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.552 [2024-10-01 14:28:08.052648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.810 [2024-10-01 14:28:08.111240] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.810 [2024-10-01 14:28:08.127444] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:25.810 INFO: Running with entropic power schedule (0xFF, 100). 00:08:25.810 INFO: Seed: 3721257712 00:08:25.810 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:25.810 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:25.810 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:25.810 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.810 #2 INITED exec/s: 0 rss: 61Mb 00:08:25.810 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
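Fuzzer 17 is brought up exactly like fuzzer 16 above: `run.sh` zero-pads the fuzzer index and appends it to `44` to form the TCP service id (4416, then 4417), rewrites `trsvcid` in the shared `fuzz_json.conf` template, creates a per-fuzzer corpus directory, and launches `llvm_nvme_fuzz` against the resulting target. A rough reconstruction of that setup from the xtrace follows; variable names mirror the trace, `$rootdir` is a stand-in for the spdk checkout, and the redirect into `$nvmf_cfg` is inferred, since xtrace does not print redirections:

    # Illustrative sketch of run.sh's per-fuzzer setup, as seen in the trace.
    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local corpus_dir=$rootdir/../corpus/llvm_nvmf_${fuzzer_type}
        local nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
        local port="44$(printf %02d "$fuzzer_type")"   # 16 -> 4416, 17 -> 4417
        mkdir -p "$corpus_dir"
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
            -P "$rootdir/../output/llvm/" \
            -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
            -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" \
            -Z "$fuzzer_type" -r "/var/tmp/spdk${fuzzer_type}.sock"
        rm -rf "$nvmf_cfg"   # removed once the run finishes, as in the trace
    }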
00:08:25.810 This may also happen if the target rejected all inputs we tried so far 00:08:25.810 [2024-10-01 14:28:08.182588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.810 [2024-10-01 14:28:08.182621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.068 NEW_FUNC[1/672]: 0x4545f8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:26.068 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:26.068 #7 NEW cov: 11684 ft: 11685 corp: 2/37b lim: 120 exec/s: 0 rss: 68Mb L: 36/36 MS: 5 CrossOver-CrossOver-CopyPart-ChangeByte-InsertRepeatedBytes- 00:08:26.068 [2024-10-01 14:28:08.505045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.068 [2024-10-01 14:28:08.505098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.068 #8 NEW cov: 11797 ft: 12321 corp: 3/68b lim: 120 exec/s: 0 rss: 68Mb L: 31/36 MS: 1 EraseBytes- 00:08:26.068 [2024-10-01 14:28:08.565497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.068 [2024-10-01 14:28:08.565524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.068 [2024-10-01 14:28:08.565625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.068 [2024-10-01 14:28:08.565644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.068 #9 NEW cov: 11803 ft: 13460 corp: 4/133b lim: 120 exec/s: 0 rss: 68Mb L: 65/65 MS: 1 CrossOver- 00:08:26.325 [2024-10-01 14:28:08.615799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.325 [2024-10-01 14:28:08.615828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.325 [2024-10-01 14:28:08.615894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.325 [2024-10-01 14:28:08.615913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.325 #10 NEW cov: 11888 ft: 13640 corp: 5/182b lim: 120 exec/s: 0 rss: 68Mb L: 49/65 MS: 1 CopyPart- 00:08:26.325 [2024-10-01 14:28:08.665513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.325 [2024-10-01 14:28:08.665541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.325 #11 NEW cov: 11888 ft: 13772 corp: 6/218b lim: 120 exec/s: 0 rss: 68Mb L: 36/65 MS: 1 CopyPart- 00:08:26.325 [2024-10-01 14:28:08.715680] nvme_qpair.c: 
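Each run ends the way fuzzers 15 and 16 did above: a final `#N DONE cov:` line mirrors the `NEW` status fields, followed by a `Done N runs in M second(s)` summary (61 runs for fuzzer 15, 72 for fuzzer 16). A quick way to pull those per-fuzzer summaries out of a captured log — `build.log` is again an assumed filename:

    # List the closing coverage line and run count for every fuzzer in the log.
    grep -E '#[0-9]+ DONE cov:|Done [0-9]+ runs' build.log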
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:36028797186736128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.325 [2024-10-01 14:28:08.715706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.325 #12 NEW cov: 11888 ft: 13831 corp: 7/249b lim: 120 exec/s: 0 rss: 68Mb L: 31/65 MS: 1 ChangeBit- 00:08:26.325 [2024-10-01 14:28:08.766224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.325 [2024-10-01 14:28:08.766253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.325 [2024-10-01 14:28:08.766333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.325 [2024-10-01 14:28:08.766350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.325 #13 NEW cov: 11888 ft: 13879 corp: 8/318b lim: 120 exec/s: 0 rss: 68Mb L: 69/69 MS: 1 CopyPart- 00:08:26.325 [2024-10-01 14:28:08.826126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.325 [2024-10-01 14:28:08.826156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.325 #17 NEW cov: 11888 ft: 13915 corp: 9/364b lim: 120 exec/s: 0 rss: 68Mb L: 46/69 MS: 4 ChangeBit-InsertByte-ChangeByte-CrossOver- 00:08:26.607 [2024-10-01 14:28:08.876271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.607 [2024-10-01 14:28:08.876300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.607 #18 NEW cov: 11888 ft: 13949 corp: 10/400b lim: 120 exec/s: 0 rss: 68Mb L: 36/69 MS: 1 ChangeByte- 00:08:26.607 [2024-10-01 14:28:08.926473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.607 [2024-10-01 14:28:08.926499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.607 #19 NEW cov: 11888 ft: 13980 corp: 11/436b lim: 120 exec/s: 0 rss: 68Mb L: 36/69 MS: 1 ChangeByte- 00:08:26.607 [2024-10-01 14:28:08.976636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:36028797186736128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.607 [2024-10-01 14:28:08.976666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.607 #20 NEW cov: 11888 ft: 14008 corp: 12/473b lim: 120 exec/s: 0 rss: 68Mb L: 37/69 MS: 1 CrossOver- 00:08:26.607 [2024-10-01 14:28:09.027229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.607 [2024-10-01 14:28:09.027260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.607 [2024-10-01 14:28:09.027356] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.607 [2024-10-01 14:28:09.027374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.607 #21 NEW cov: 11888 ft: 14020 corp: 13/531b lim: 120 exec/s: 0 rss: 68Mb L: 58/69 MS: 1 CopyPart- 00:08:26.607 [2024-10-01 14:28:09.077178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:36028797186759424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.607 [2024-10-01 14:28:09.077208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.607 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:26.607 #22 NEW cov: 11911 ft: 14097 corp: 14/562b lim: 120 exec/s: 0 rss: 68Mb L: 31/69 MS: 1 ChangeByte- 00:08:26.607 [2024-10-01 14:28:09.127305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:504403158433333248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.607 [2024-10-01 14:28:09.127335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.864 #23 NEW cov: 11911 ft: 14128 corp: 15/598b lim: 120 exec/s: 0 rss: 68Mb L: 36/69 MS: 1 ChangeBinInt- 00:08:26.864 [2024-10-01 14:28:09.177575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.864 [2024-10-01 14:28:09.177605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.864 #24 NEW cov: 11911 ft: 14146 corp: 16/624b lim: 120 exec/s: 24 rss: 68Mb L: 26/69 MS: 1 EraseBytes- 00:08:26.864 [2024-10-01 14:28:09.227736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:36028797186736128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.864 [2024-10-01 14:28:09.227766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.864 #25 NEW cov: 11911 ft: 14243 corp: 17/655b lim: 120 exec/s: 25 rss: 68Mb L: 31/69 MS: 1 ChangeBit- 00:08:26.864 [2024-10-01 14:28:09.277874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.864 [2024-10-01 14:28:09.277904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.864 #26 NEW cov: 11911 ft: 14249 corp: 18/691b lim: 120 exec/s: 26 rss: 68Mb L: 36/69 MS: 1 ChangeBinInt- 00:08:26.864 [2024-10-01 14:28:09.328082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:504403158433333248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.864 [2024-10-01 14:28:09.328111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.864 #27 NEW cov: 11911 ft: 14262 corp: 19/728b lim: 120 exec/s: 27 rss: 68Mb L: 37/69 MS: 1 InsertByte- 00:08:27.121 [2024-10-01 14:28:09.389394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2117795840 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.121 [2024-10-01 14:28:09.389428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.121 [2024-10-01 14:28:09.389492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.121 [2024-10-01 14:28:09.389515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.121 [2024-10-01 14:28:09.389566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.121 [2024-10-01 14:28:09.389584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.121 [2024-10-01 14:28:09.389678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.121 [2024-10-01 14:28:09.389697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.121 #32 NEW cov: 11911 ft: 14712 corp: 20/834b lim: 120 exec/s: 32 rss: 68Mb L: 106/106 MS: 5 ChangeBit-ChangeByte-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:08:27.121 [2024-10-01 14:28:09.438580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:504403158433333248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.121 [2024-10-01 14:28:09.438609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.121 #33 NEW cov: 11911 ft: 14760 corp: 21/871b lim: 120 exec/s: 33 rss: 68Mb L: 37/106 MS: 1 ShuffleBytes- 00:08:27.121 [2024-10-01 14:28:09.488796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.121 [2024-10-01 14:28:09.488824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.121 #34 NEW cov: 11911 ft: 14803 corp: 22/917b lim: 120 exec/s: 34 rss: 68Mb L: 46/106 MS: 1 ShuffleBytes- 00:08:27.121 [2024-10-01 14:28:09.539049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.121 [2024-10-01 14:28:09.539078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.121 #35 NEW cov: 11911 ft: 14823 corp: 23/953b lim: 120 exec/s: 35 rss: 68Mb L: 36/106 MS: 1 ShuffleBytes- 00:08:27.121 [2024-10-01 14:28:09.589250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:170524672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.121 [2024-10-01 14:28:09.589279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.121 #36 NEW cov: 11911 ft: 14893 corp: 24/989b lim: 120 exec/s: 36 rss: 68Mb L: 36/106 MS: 1 ChangeByte- 00:08:27.121 [2024-10-01 14:28:09.639577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.121 [2024-10-01 
14:28:09.639606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.377 #37 NEW cov: 11911 ft: 14969 corp: 25/1025b lim: 120 exec/s: 37 rss: 69Mb L: 36/106 MS: 1 ChangeBinInt- 00:08:27.377 [2024-10-01 14:28:09.690091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:504403158433333248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.377 [2024-10-01 14:28:09.690120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.377 [2024-10-01 14:28:09.690182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.377 [2024-10-01 14:28:09.690204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.377 #38 NEW cov: 11911 ft: 14996 corp: 26/1089b lim: 120 exec/s: 38 rss: 69Mb L: 64/106 MS: 1 CopyPart- 00:08:27.377 [2024-10-01 14:28:09.749945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:36028797186759424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.377 [2024-10-01 14:28:09.749976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.377 #39 NEW cov: 11911 ft: 15009 corp: 27/1120b lim: 120 exec/s: 39 rss: 69Mb L: 31/106 MS: 1 ShuffleBytes- 00:08:27.377 [2024-10-01 14:28:09.800050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.377 [2024-10-01 14:28:09.800080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.377 #40 NEW cov: 11911 ft: 15029 corp: 28/1149b lim: 120 exec/s: 40 rss: 69Mb L: 29/106 MS: 1 EraseBytes- 00:08:27.377 [2024-10-01 14:28:09.850208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.377 [2024-10-01 14:28:09.850236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.377 #41 NEW cov: 11911 ft: 15037 corp: 29/1185b lim: 120 exec/s: 41 rss: 69Mb L: 36/106 MS: 1 ChangeByte- 00:08:27.377 [2024-10-01 14:28:09.901091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:504403158433333248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.377 [2024-10-01 14:28:09.901122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.378 [2024-10-01 14:28:09.901210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.378 [2024-10-01 14:28:09.901231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.634 #42 NEW cov: 11911 ft: 15054 corp: 30/1249b lim: 120 exec/s: 42 rss: 69Mb L: 64/106 MS: 1 ChangeBit- 00:08:27.634 [2024-10-01 14:28:09.960905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:36028797186736128 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:27.634 [2024-10-01 14:28:09.960935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.634 #43 NEW cov: 11911 ft: 15061 corp: 31/1280b lim: 120 exec/s: 43 rss: 69Mb L: 31/106 MS: 1 CopyPart- 00:08:27.634 [2024-10-01 14:28:10.011405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.634 [2024-10-01 14:28:10.011434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.634 #44 NEW cov: 11911 ft: 15117 corp: 32/1310b lim: 120 exec/s: 44 rss: 69Mb L: 30/106 MS: 1 EraseBytes- 00:08:27.634 [2024-10-01 14:28:10.071948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.634 [2024-10-01 14:28:10.071985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.634 [2024-10-01 14:28:10.072082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.634 [2024-10-01 14:28:10.072100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.634 #45 NEW cov: 11911 ft: 15124 corp: 33/1368b lim: 120 exec/s: 45 rss: 69Mb L: 58/106 MS: 1 ChangeByte- 00:08:27.634 [2024-10-01 14:28:10.131943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.634 [2024-10-01 14:28:10.131974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.634 #46 NEW cov: 11911 ft: 15135 corp: 34/1404b lim: 120 exec/s: 46 rss: 69Mb L: 36/106 MS: 1 CMP- DE: "\177$\371\324\377\340!\000"- 00:08:27.891 [2024-10-01 14:28:10.182096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.891 [2024-10-01 14:28:10.182123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.891 #47 NEW cov: 11911 ft: 15150 corp: 35/1440b lim: 120 exec/s: 23 rss: 69Mb L: 36/106 MS: 1 ChangeBit- 00:08:27.891 #47 DONE cov: 11911 ft: 15150 corp: 35/1440b lim: 120 exec/s: 23 rss: 69Mb 00:08:27.891 ###### Recommended dictionary. ###### 00:08:27.891 "\177$\371\324\377\340!\000" # Uses: 0 00:08:27.891 ###### End of recommended dictionary. 
###### 00:08:27.891 Done 47 runs in 2 second(s) 00:08:27.891 14:28:10 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_17.conf 00:08:27.891 14:28:10 -- ../common.sh@72 -- # (( i++ )) 00:08:27.891 14:28:10 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.891 14:28:10 -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:27.891 14:28:10 -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:27.891 14:28:10 -- nvmf/run.sh@24 -- # local timen=1 00:08:27.891 14:28:10 -- nvmf/run.sh@25 -- # local core=0x1 00:08:27.891 14:28:10 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:27.891 14:28:10 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:27.891 14:28:10 -- nvmf/run.sh@29 -- # printf %02d 18 00:08:27.891 14:28:10 -- nvmf/run.sh@29 -- # port=4418 00:08:27.891 14:28:10 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:27.891 14:28:10 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:27.891 14:28:10 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:27.891 14:28:10 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 -r /var/tmp/spdk18.sock 00:08:27.891 [2024-10-01 14:28:10.371469] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:27.891 [2024-10-01 14:28:10.371551] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707189 ] 00:08:27.891 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.148 [2024-10-01 14:28:10.659260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.405 [2024-10-01 14:28:10.740985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:28.405 [2024-10-01 14:28:10.741140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.405 [2024-10-01 14:28:10.799768] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.405 [2024-10-01 14:28:10.815981] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:28.405 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.405 INFO: Seed: 2115300602 00:08:28.405 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:28.405 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:28.405 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:28.405 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.405 #2 INITED exec/s: 0 rss: 61Mb 00:08:28.405 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:28.405 This may also happen if the target rejected all inputs we tried so far 00:08:28.405 [2024-10-01 14:28:10.893079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.405 [2024-10-01 14:28:10.893131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.405 [2024-10-01 14:28:10.893257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.405 [2024-10-01 14:28:10.893292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.405 [2024-10-01 14:28:10.893424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.405 [2024-10-01 14:28:10.893446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.966 NEW_FUNC[1/670]: 0x457e58 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:28.966 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:28.966 #16 NEW cov: 11624 ft: 11623 corp: 2/70b lim: 100 exec/s: 0 rss: 68Mb L: 69/69 MS: 4 CrossOver-InsertByte-InsertByte-InsertRepeatedBytes- 00:08:28.966 [2024-10-01 14:28:11.213993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.966 [2024-10-01 14:28:11.214037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.966 [2024-10-01 14:28:11.214096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.966 [2024-10-01 14:28:11.214114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.966 [2024-10-01 14:28:11.214200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.966 [2024-10-01 14:28:11.214217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.966 #17 NEW cov: 11741 ft: 12089 corp: 3/139b lim: 100 exec/s: 0 rss: 68Mb L: 69/69 MS: 1 ChangeBinInt- 00:08:28.966 [2024-10-01 14:28:11.274092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.966 [2024-10-01 14:28:11.274119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.966 [2024-10-01 14:28:11.274184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.966 [2024-10-01 14:28:11.274202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.966 [2024-10-01 14:28:11.274267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.966 [2024-10-01 14:28:11.274286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.966 #23 NEW cov: 11747 ft: 12373 corp: 
4/209b lim: 100 exec/s: 0 rss: 68Mb L: 70/70 MS: 1 InsertRepeatedBytes- 00:08:28.966 [2024-10-01 14:28:11.324331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.966 [2024-10-01 14:28:11.324362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.966 [2024-10-01 14:28:11.324435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.966 [2024-10-01 14:28:11.324451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.966 [2024-10-01 14:28:11.324536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.966 [2024-10-01 14:28:11.324552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.966 #24 NEW cov: 11832 ft: 12660 corp: 5/279b lim: 100 exec/s: 0 rss: 68Mb L: 70/70 MS: 1 CrossOver- 00:08:28.966 [2024-10-01 14:28:11.385038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.966 [2024-10-01 14:28:11.385069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.967 [2024-10-01 14:28:11.385131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.967 [2024-10-01 14:28:11.385146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.967 [2024-10-01 14:28:11.385228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.967 [2024-10-01 14:28:11.385247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.967 [2024-10-01 14:28:11.385299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:28.967 [2024-10-01 14:28:11.385317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.967 #25 NEW cov: 11832 ft: 13057 corp: 6/360b lim: 100 exec/s: 0 rss: 68Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:08:28.967 [2024-10-01 14:28:11.444973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.967 [2024-10-01 14:28:11.445003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.967 [2024-10-01 14:28:11.445064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.967 [2024-10-01 14:28:11.445080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.967 [2024-10-01 14:28:11.445170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.967 [2024-10-01 14:28:11.445185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.967 #28 NEW cov: 11832 ft: 13233 corp: 7/431b lim: 100 exec/s: 0 rss: 68Mb L: 71/81 MS: 3 
InsertByte-CopyPart-CrossOver- 00:08:29.224 [2024-10-01 14:28:11.495258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.224 [2024-10-01 14:28:11.495286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.224 [2024-10-01 14:28:11.495362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.224 [2024-10-01 14:28:11.495379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.224 [2024-10-01 14:28:11.495434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.224 [2024-10-01 14:28:11.495451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.224 #29 NEW cov: 11832 ft: 13333 corp: 8/508b lim: 100 exec/s: 0 rss: 69Mb L: 77/81 MS: 1 CopyPart- 00:08:29.224 [2024-10-01 14:28:11.555407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.224 [2024-10-01 14:28:11.555434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.224 [2024-10-01 14:28:11.555502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.224 [2024-10-01 14:28:11.555517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.224 [2024-10-01 14:28:11.555595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.224 [2024-10-01 14:28:11.555612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.224 #30 NEW cov: 11832 ft: 13414 corp: 9/586b lim: 100 exec/s: 0 rss: 69Mb L: 78/81 MS: 1 InsertByte- 00:08:29.224 [2024-10-01 14:28:11.615296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.224 [2024-10-01 14:28:11.615322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.224 [2024-10-01 14:28:11.615375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.224 [2024-10-01 14:28:11.615393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.224 #31 NEW cov: 11832 ft: 13723 corp: 10/627b lim: 100 exec/s: 0 rss: 69Mb L: 41/81 MS: 1 CrossOver- 00:08:29.224 [2024-10-01 14:28:11.665859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.224 [2024-10-01 14:28:11.665884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.224 [2024-10-01 14:28:11.665945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.224 [2024-10-01 14:28:11.665963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.224 [2024-10-01 14:28:11.666033] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.224 [2024-10-01 14:28:11.666050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.224 #32 NEW cov: 11832 ft: 13790 corp: 11/698b lim: 100 exec/s: 0 rss: 69Mb L: 71/81 MS: 1 CopyPart- 00:08:29.224 [2024-10-01 14:28:11.716003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.224 [2024-10-01 14:28:11.716030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.224 [2024-10-01 14:28:11.716113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.224 [2024-10-01 14:28:11.716131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.224 [2024-10-01 14:28:11.716208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.224 [2024-10-01 14:28:11.716228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.224 #33 NEW cov: 11832 ft: 13819 corp: 12/775b lim: 100 exec/s: 0 rss: 69Mb L: 77/81 MS: 1 ChangeByte- 00:08:29.481 [2024-10-01 14:28:11.765764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.481 [2024-10-01 14:28:11.765794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.481 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:29.481 #39 NEW cov: 11855 ft: 14184 corp: 13/805b lim: 100 exec/s: 0 rss: 69Mb L: 30/81 MS: 1 CrossOver- 00:08:29.481 [2024-10-01 14:28:11.816528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.481 [2024-10-01 14:28:11.816556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.481 [2024-10-01 14:28:11.816632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.481 [2024-10-01 14:28:11.816650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.481 [2024-10-01 14:28:11.816735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.481 [2024-10-01 14:28:11.816755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.481 #40 NEW cov: 11855 ft: 14236 corp: 14/883b lim: 100 exec/s: 0 rss: 69Mb L: 78/81 MS: 1 ChangeBinInt- 00:08:29.481 [2024-10-01 14:28:11.876580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.481 [2024-10-01 14:28:11.876610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.481 [2024-10-01 14:28:11.876710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.481 [2024-10-01 14:28:11.876735] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.481 #41 NEW cov: 11855 ft: 14261 corp: 15/942b lim: 100 exec/s: 41 rss: 69Mb L: 59/81 MS: 1 EraseBytes- 00:08:29.481 [2024-10-01 14:28:11.936836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.481 [2024-10-01 14:28:11.936863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.481 [2024-10-01 14:28:11.936922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.481 [2024-10-01 14:28:11.936938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.481 #43 NEW cov: 11855 ft: 14290 corp: 16/984b lim: 100 exec/s: 43 rss: 69Mb L: 42/81 MS: 2 ChangeByte-CrossOver- 00:08:29.481 [2024-10-01 14:28:11.986856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.481 [2024-10-01 14:28:11.986884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.738 #44 NEW cov: 11855 ft: 14348 corp: 17/1017b lim: 100 exec/s: 44 rss: 69Mb L: 33/81 MS: 1 CrossOver- 00:08:29.738 [2024-10-01 14:28:12.047393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.738 [2024-10-01 14:28:12.047420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.738 [2024-10-01 14:28:12.047491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.738 [2024-10-01 14:28:12.047506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.738 #45 NEW cov: 11855 ft: 14357 corp: 18/1060b lim: 100 exec/s: 45 rss: 69Mb L: 43/81 MS: 1 EraseBytes- 00:08:29.738 [2024-10-01 14:28:12.107758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.738 [2024-10-01 14:28:12.107785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.738 [2024-10-01 14:28:12.107856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.739 [2024-10-01 14:28:12.107873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.739 [2024-10-01 14:28:12.107957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.739 [2024-10-01 14:28:12.107975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.739 #46 NEW cov: 11855 ft: 14431 corp: 19/1138b lim: 100 exec/s: 46 rss: 69Mb L: 78/81 MS: 1 ShuffleBytes- 00:08:29.739 [2024-10-01 14:28:12.168362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.739 [2024-10-01 14:28:12.168392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.739 [2024-10-01 
14:28:12.168455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.739 [2024-10-01 14:28:12.168470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.739 [2024-10-01 14:28:12.168554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.739 [2024-10-01 14:28:12.168573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.739 [2024-10-01 14:28:12.168663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:29.739 [2024-10-01 14:28:12.168682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.739 #47 NEW cov: 11855 ft: 14513 corp: 20/1228b lim: 100 exec/s: 47 rss: 69Mb L: 90/90 MS: 1 CrossOver- 00:08:29.739 [2024-10-01 14:28:12.218075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.739 [2024-10-01 14:28:12.218100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.739 [2024-10-01 14:28:12.218199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.739 [2024-10-01 14:28:12.218213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.739 #48 NEW cov: 11855 ft: 14527 corp: 21/1282b lim: 100 exec/s: 48 rss: 69Mb L: 54/90 MS: 1 EraseBytes- 00:08:29.996 [2024-10-01 14:28:12.268516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.996 [2024-10-01 14:28:12.268544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.268616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.996 [2024-10-01 14:28:12.268633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.268705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.996 [2024-10-01 14:28:12.268727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.996 #49 NEW cov: 11855 ft: 14561 corp: 22/1351b lim: 100 exec/s: 49 rss: 69Mb L: 69/90 MS: 1 ChangeByte- 00:08:29.996 [2024-10-01 14:28:12.318748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.996 [2024-10-01 14:28:12.318775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.318847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.996 [2024-10-01 14:28:12.318864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.318933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.996 [2024-10-01 14:28:12.318950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.996 #50 NEW cov: 11855 ft: 14572 corp: 23/1428b lim: 100 exec/s: 50 rss: 69Mb L: 77/90 MS: 1 ShuffleBytes- 00:08:29.996 [2024-10-01 14:28:12.368903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.996 [2024-10-01 14:28:12.368931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.369003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.996 [2024-10-01 14:28:12.369020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.369079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.996 [2024-10-01 14:28:12.369095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.996 #51 NEW cov: 11855 ft: 14575 corp: 24/1506b lim: 100 exec/s: 51 rss: 69Mb L: 78/90 MS: 1 ChangeBinInt- 00:08:29.996 [2024-10-01 14:28:12.418904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.996 [2024-10-01 14:28:12.418932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.419029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.996 [2024-10-01 14:28:12.419046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.996 #52 NEW cov: 11855 ft: 14581 corp: 25/1565b lim: 100 exec/s: 52 rss: 69Mb L: 59/90 MS: 1 ChangeBinInt- 00:08:29.996 [2024-10-01 14:28:12.479722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.996 [2024-10-01 14:28:12.479748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.479826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.996 [2024-10-01 14:28:12.479844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.479929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.996 [2024-10-01 14:28:12.479948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.996 [2024-10-01 14:28:12.480033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:29.996 [2024-10-01 14:28:12.480052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.996 #53 NEW cov: 11855 ft: 14643 corp: 26/1661b lim: 100 exec/s: 53 rss: 69Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:08:30.254 [2024-10-01 14:28:12.530156] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.254 [2024-10-01 14:28:12.530183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.254 [2024-10-01 14:28:12.530268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.254 [2024-10-01 14:28:12.530285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.254 [2024-10-01 14:28:12.530368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.254 [2024-10-01 14:28:12.530386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.254 [2024-10-01 14:28:12.530471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:30.254 [2024-10-01 14:28:12.530488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.254 #54 NEW cov: 11855 ft: 14748 corp: 27/1755b lim: 100 exec/s: 54 rss: 69Mb L: 94/96 MS: 1 InsertRepeatedBytes- 00:08:30.254 [2024-10-01 14:28:12.589883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.254 [2024-10-01 14:28:12.589910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.254 [2024-10-01 14:28:12.589970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.254 [2024-10-01 14:28:12.589989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.254 [2024-10-01 14:28:12.590058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.254 [2024-10-01 14:28:12.590078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.254 #60 NEW cov: 11855 ft: 14763 corp: 28/1832b lim: 100 exec/s: 60 rss: 69Mb L: 77/96 MS: 1 ShuffleBytes- 00:08:30.254 [2024-10-01 14:28:12.640058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.254 [2024-10-01 14:28:12.640083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.254 [2024-10-01 14:28:12.640149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.254 [2024-10-01 14:28:12.640162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.254 [2024-10-01 14:28:12.640231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.254 [2024-10-01 14:28:12.640247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.254 #61 NEW cov: 11855 ft: 14776 corp: 29/1909b lim: 100 exec/s: 61 rss: 69Mb L: 77/96 MS: 1 ChangeBinInt- 00:08:30.254 [2024-10-01 14:28:12.690340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.254 [2024-10-01 14:28:12.690366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.254 [2024-10-01 14:28:12.690451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.254 [2024-10-01 14:28:12.690468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.254 [2024-10-01 14:28:12.690555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.254 [2024-10-01 14:28:12.690573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.255 #62 NEW cov: 11855 ft: 14797 corp: 30/1986b lim: 100 exec/s: 62 rss: 69Mb L: 77/96 MS: 1 CMP- DE: "\001\002\000\000"- 00:08:30.255 [2024-10-01 14:28:12.740576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.255 [2024-10-01 14:28:12.740603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.255 [2024-10-01 14:28:12.740688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.255 [2024-10-01 14:28:12.740707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.255 [2024-10-01 14:28:12.740761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.255 [2024-10-01 14:28:12.740774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.255 #63 NEW cov: 11855 ft: 14812 corp: 31/2055b lim: 100 exec/s: 63 rss: 69Mb L: 69/96 MS: 1 CrossOver- 00:08:30.513 [2024-10-01 14:28:12.800582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.513 [2024-10-01 14:28:12.800610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.513 [2024-10-01 14:28:12.800690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.513 [2024-10-01 14:28:12.800707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.513 #64 NEW cov: 11855 ft: 14821 corp: 32/2097b lim: 100 exec/s: 64 rss: 70Mb L: 42/96 MS: 1 ChangeBinInt- 00:08:30.513 [2024-10-01 14:28:12.861072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.513 [2024-10-01 14:28:12.861101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.513 [2024-10-01 14:28:12.861163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.513 [2024-10-01 14:28:12.861183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.513 [2024-10-01 14:28:12.861251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.513 
[2024-10-01 14:28:12.861267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.513 #65 NEW cov: 11855 ft: 14830 corp: 33/2166b lim: 100 exec/s: 32 rss: 70Mb L: 69/96 MS: 1 ChangeBinInt- 00:08:30.513 #65 DONE cov: 11855 ft: 14830 corp: 33/2166b lim: 100 exec/s: 32 rss: 70Mb 00:08:30.513 ###### Recommended dictionary. ###### 00:08:30.513 "\001\002\000\000" # Uses: 0 00:08:30.513 ###### End of recommended dictionary. ###### 00:08:30.513 Done 65 runs in 2 second(s) 00:08:30.513 14:28:13 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_18.conf 00:08:30.513 14:28:13 -- ../common.sh@72 -- # (( i++ )) 00:08:30.513 14:28:13 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.513 14:28:13 -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:30.513 14:28:13 -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:30.513 14:28:13 -- nvmf/run.sh@24 -- # local timen=1 00:08:30.513 14:28:13 -- nvmf/run.sh@25 -- # local core=0x1 00:08:30.513 14:28:13 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:30.513 14:28:13 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:30.513 14:28:13 -- nvmf/run.sh@29 -- # printf %02d 19 00:08:30.513 14:28:13 -- nvmf/run.sh@29 -- # port=4419 00:08:30.513 14:28:13 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:30.513 14:28:13 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:30.513 14:28:13 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:30.513 14:28:13 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 -r /var/tmp/spdk19.sock 00:08:30.771 [2024-10-01 14:28:13.048904] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:30.771 [2024-10-01 14:28:13.048958] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707562 ] 00:08:30.771 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.771 [2024-10-01 14:28:13.250864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.029 [2024-10-01 14:28:13.325365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:31.029 [2024-10-01 14:28:13.325496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.029 [2024-10-01 14:28:13.384133] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.029 [2024-10-01 14:28:13.400351] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:31.029 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:31.029 INFO: Seed: 404333962 00:08:31.029 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:31.029 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:31.029 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:31.029 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.029 #2 INITED exec/s: 0 rss: 61Mb 00:08:31.029 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:31.029 This may also happen if the target rejected all inputs we tried so far 00:08:31.029 [2024-10-01 14:28:13.455469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36495 00:08:31.029 [2024-10-01 14:28:13.455505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.287 NEW_FUNC[1/670]: 0x45ae18 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:31.287 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:31.287 #17 NEW cov: 11606 ft: 11604 corp: 2/18b lim: 50 exec/s: 0 rss: 68Mb L: 17/17 MS: 5 CrossOver-CopyPart-InsertByte-CMP-InsertRepeatedBytes- DE: "\000\000\000\000"- 00:08:31.287 [2024-10-01 14:28:13.766195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:40126187397840896 len:36495 00:08:31.287 [2024-10-01 14:28:13.766230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.287 #20 NEW cov: 11719 ft: 12064 corp: 3/37b lim: 50 exec/s: 0 rss: 68Mb L: 19/19 MS: 3 InsertByte-CMP-CrossOver- DE: "\012\000"- 00:08:31.287 [2024-10-01 14:28:13.806267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36495 00:08:31.287 [2024-10-01 14:28:13.806296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.544 #21 NEW cov: 11725 ft: 12324 corp: 4/55b lim: 50 exec/s: 0 rss: 68Mb L: 18/19 MS: 1 InsertByte- 00:08:31.544 [2024-10-01 14:28:13.846346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36495 00:08:31.544 [2024-10-01 14:28:13.846373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.544 #22 NEW cov: 11810 ft: 12691 corp: 5/73b lim: 50 exec/s: 0 rss: 68Mb L: 18/19 MS: 1 PersAutoDict- DE: "\012\000"- 00:08:31.544 [2024-10-01 14:28:13.886515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36353 00:08:31.544 [2024-10-01 14:28:13.886543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.544 #23 NEW cov: 11810 ft: 12808 corp: 6/91b lim: 50 exec/s: 0 rss: 68Mb L: 18/19 MS: 1 CopyPart- 00:08:31.544 [2024-10-01 14:28:13.926595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540783017728 len:36353 
00:08:31.544 [2024-10-01 14:28:13.926623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.544 #24 NEW cov: 11810 ft: 12930 corp: 7/109b lim: 50 exec/s: 0 rss: 69Mb L: 18/19 MS: 1 ChangeBinInt- 00:08:31.544 [2024-10-01 14:28:13.966693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36495 00:08:31.544 [2024-10-01 14:28:13.966724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.544 #25 NEW cov: 11810 ft: 12974 corp: 8/127b lim: 50 exec/s: 0 rss: 69Mb L: 18/19 MS: 1 CrossOver- 00:08:31.544 [2024-10-01 14:28:14.006930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2594230126070333440 len:36495 00:08:31.544 [2024-10-01 14:28:14.006957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.544 [2024-10-01 14:28:14.006990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2814795108486798 len:2812 00:08:31.544 [2024-10-01 14:28:14.007005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.545 #26 NEW cov: 11810 ft: 13381 corp: 9/147b lim: 50 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 InsertByte- 00:08:31.545 [2024-10-01 14:28:14.046919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:156740704308878 len:36495 00:08:31.545 [2024-10-01 14:28:14.046948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.545 #27 NEW cov: 11810 ft: 13479 corp: 10/165b lim: 50 exec/s: 0 rss: 69Mb L: 18/20 MS: 1 ShuffleBytes- 00:08:31.802 [2024-10-01 14:28:14.087093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36495 00:08:31.802 [2024-10-01 14:28:14.087120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.802 #28 NEW cov: 11810 ft: 13536 corp: 11/182b lim: 50 exec/s: 0 rss: 69Mb L: 17/20 MS: 1 ChangeBinInt- 00:08:31.802 [2024-10-01 14:28:14.127295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:36495 00:08:31.802 [2024-10-01 14:28:14.127321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.802 [2024-10-01 14:28:14.127368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:720575942770986638 len:2571 00:08:31.802 [2024-10-01 14:28:14.127383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.802 #34 NEW cov: 11810 ft: 13576 corp: 12/204b lim: 50 exec/s: 0 rss: 69Mb L: 22/22 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:08:31.802 [2024-10-01 14:28:14.167276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272295744689930240 len:36495 00:08:31.802 [2024-10-01 14:28:14.167303] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.802 #35 NEW cov: 11810 ft: 13585 corp: 13/222b lim: 50 exec/s: 0 rss: 69Mb L: 18/22 MS: 1 ChangeBit- 00:08:31.802 [2024-10-01 14:28:14.197365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3834029160418063669 len:13622 00:08:31.802 [2024-10-01 14:28:14.197391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.802 #37 NEW cov: 11810 ft: 13642 corp: 14/237b lim: 50 exec/s: 0 rss: 69Mb L: 15/22 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:31.802 [2024-10-01 14:28:14.237576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:36594 00:08:31.802 [2024-10-01 14:28:14.237602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.802 [2024-10-01 14:28:14.237635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:720575942770986638 len:2571 00:08:31.802 [2024-10-01 14:28:14.237651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.802 #38 NEW cov: 11810 ft: 13670 corp: 15/259b lim: 50 exec/s: 0 rss: 69Mb L: 22/22 MS: 1 ChangeByte- 00:08:31.802 [2024-10-01 14:28:14.277621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3832340310557799733 len:13622 00:08:31.802 [2024-10-01 14:28:14.277648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.802 #39 NEW cov: 11810 ft: 13710 corp: 16/274b lim: 50 exec/s: 0 rss: 69Mb L: 15/22 MS: 1 ChangeBinInt- 00:08:31.802 [2024-10-01 14:28:14.317701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272295744689930240 len:36495 00:08:31.802 [2024-10-01 14:28:14.317731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.060 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:32.060 #40 NEW cov: 11833 ft: 13743 corp: 17/292b lim: 50 exec/s: 0 rss: 69Mb L: 18/22 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:08:32.060 [2024-10-01 14:28:14.357818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3834029160418063669 len:13622 00:08:32.060 [2024-10-01 14:28:14.357845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.060 #41 NEW cov: 11833 ft: 13760 corp: 18/307b lim: 50 exec/s: 0 rss: 69Mb L: 15/22 MS: 1 ShuffleBytes- 00:08:32.060 [2024-10-01 14:28:14.398049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36353 00:08:32.060 [2024-10-01 14:28:14.398077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.060 [2024-10-01 14:28:14.398124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 
lba:10252726026694987406 len:1 00:08:32.060 [2024-10-01 14:28:14.398139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.060 #42 NEW cov: 11833 ft: 13769 corp: 19/333b lim: 50 exec/s: 0 rss: 69Mb L: 26/26 MS: 1 CMP- DE: "I\000\000\000\000\000\000\000"- 00:08:32.060 [2024-10-01 14:28:14.438060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36495 00:08:32.060 [2024-10-01 14:28:14.438089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.060 #43 NEW cov: 11833 ft: 13775 corp: 20/351b lim: 50 exec/s: 43 rss: 69Mb L: 18/26 MS: 1 CopyPart- 00:08:32.060 [2024-10-01 14:28:14.478162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:720732681083551744 len:36495 00:08:32.060 [2024-10-01 14:28:14.478190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.060 #44 NEW cov: 11833 ft: 13854 corp: 21/369b lim: 50 exec/s: 44 rss: 69Mb L: 18/26 MS: 1 PersAutoDict- DE: "\012\000"- 00:08:32.060 [2024-10-01 14:28:14.518295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36445 00:08:32.060 [2024-10-01 14:28:14.518322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.060 #45 NEW cov: 11833 ft: 13927 corp: 22/388b lim: 50 exec/s: 45 rss: 69Mb L: 19/26 MS: 1 InsertByte- 00:08:32.060 [2024-10-01 14:28:14.558438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272295744689930240 len:36495 00:08:32.060 [2024-10-01 14:28:14.558467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.318 #46 NEW cov: 11833 ft: 13947 corp: 23/401b lim: 50 exec/s: 46 rss: 69Mb L: 13/26 MS: 1 EraseBytes- 00:08:32.318 [2024-10-01 14:28:14.598494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10128189352707096576 len:36495 00:08:32.318 [2024-10-01 14:28:14.598522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.318 #47 NEW cov: 11833 ft: 13955 corp: 24/419b lim: 50 exec/s: 47 rss: 69Mb L: 18/26 MS: 1 ChangeBit- 00:08:32.318 [2024-10-01 14:28:14.628594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:40126187397208832 len:36495 00:08:32.318 [2024-10-01 14:28:14.628621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.318 #48 NEW cov: 11833 ft: 13990 corp: 25/438b lim: 50 exec/s: 48 rss: 69Mb L: 19/26 MS: 1 InsertByte- 00:08:32.318 [2024-10-01 14:28:14.668689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2850372354876375040 len:36495 00:08:32.318 [2024-10-01 14:28:14.668716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.318 #49 NEW cov: 11833 ft: 14053 corp: 26/456b lim: 50 
exec/s: 49 rss: 69Mb L: 18/26 MS: 1 ChangeByte- 00:08:32.318 [2024-10-01 14:28:14.709023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272295744689930240 len:36495 00:08:32.318 [2024-10-01 14:28:14.709050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.318 [2024-10-01 14:28:14.709082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744071806255359 len:65536 00:08:32.318 [2024-10-01 14:28:14.709097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.318 [2024-10-01 14:28:14.709147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:32.318 [2024-10-01 14:28:14.709161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.318 #50 NEW cov: 11833 ft: 14327 corp: 27/493b lim: 50 exec/s: 50 rss: 69Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:08:32.318 [2024-10-01 14:28:14.748933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36495 00:08:32.318 [2024-10-01 14:28:14.748959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.318 #51 NEW cov: 11833 ft: 14331 corp: 28/510b lim: 50 exec/s: 51 rss: 69Mb L: 17/37 MS: 1 ShuffleBytes- 00:08:32.318 [2024-10-01 14:28:14.789148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10234993103320645774 len:11 00:08:32.318 [2024-10-01 14:28:14.789174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.318 [2024-10-01 14:28:14.789209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:720575942770986638 len:2571 00:08:32.318 [2024-10-01 14:28:14.789225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.318 #52 NEW cov: 11833 ft: 14405 corp: 29/532b lim: 50 exec/s: 52 rss: 69Mb L: 22/37 MS: 1 CopyPart- 00:08:32.318 [2024-10-01 14:28:14.829381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:40126187397208832 len:36495 00:08:32.318 [2024-10-01 14:28:14.829407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.318 [2024-10-01 14:28:14.829441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2970925759630848 len:64267 00:08:32.318 [2024-10-01 14:28:14.829456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.318 [2024-10-01 14:28:14.829507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304540615180430 len:1 00:08:32.318 [2024-10-01 14:28:14.829522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.576 #53 
NEW cov: 11833 ft: 14424 corp: 30/569b lim: 50 exec/s: 53 rss: 69Mb L: 37/37 MS: 1 CrossOver- 00:08:32.576 [2024-10-01 14:28:14.869301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10252726026862759566 len:1 00:08:32.576 [2024-10-01 14:28:14.869328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.576 #54 NEW cov: 11833 ft: 14432 corp: 31/585b lim: 50 exec/s: 54 rss: 69Mb L: 16/37 MS: 1 EraseBytes- 00:08:32.576 [2024-10-01 14:28:14.909445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10252726026862759566 len:1 00:08:32.576 [2024-10-01 14:28:14.909474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.576 #55 NEW cov: 11833 ft: 14440 corp: 32/601b lim: 50 exec/s: 55 rss: 69Mb L: 16/37 MS: 1 ShuffleBytes- 00:08:32.576 [2024-10-01 14:28:14.949526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36495 00:08:32.576 [2024-10-01 14:28:14.949557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.576 #56 NEW cov: 11833 ft: 14449 corp: 33/618b lim: 50 exec/s: 56 rss: 69Mb L: 17/37 MS: 1 ChangeBinInt- 00:08:32.576 [2024-10-01 14:28:14.989634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36445 00:08:32.576 [2024-10-01 14:28:14.989660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.576 #57 NEW cov: 11833 ft: 14461 corp: 34/637b lim: 50 exec/s: 57 rss: 70Mb L: 19/37 MS: 1 ChangeBinInt- 00:08:32.576 [2024-10-01 14:28:15.029862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10232178963438895104 len:36495 00:08:32.576 [2024-10-01 14:28:15.029888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.576 [2024-10-01 14:28:15.029939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:40126155256823808 len:36495 00:08:32.576 [2024-10-01 14:28:15.029955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.576 #58 NEW cov: 11833 ft: 14471 corp: 35/661b lim: 50 exec/s: 58 rss: 70Mb L: 24/37 MS: 1 CrossOver- 00:08:32.576 [2024-10-01 14:28:15.069980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10234993103320645774 len:11 00:08:32.576 [2024-10-01 14:28:15.070007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.576 [2024-10-01 14:28:15.070038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:720575942770986670 len:2571 00:08:32.576 [2024-10-01 14:28:15.070054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.576 #59 NEW cov: 11833 ft: 14478 corp: 36/683b lim: 50 exec/s: 59 rss: 70Mb L: 22/37 MS: 1 
ChangeBit- 00:08:32.834 [2024-10-01 14:28:15.109999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272303930897596416 len:4957 00:08:32.834 [2024-10-01 14:28:15.110026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.834 #60 NEW cov: 11833 ft: 14495 corp: 37/702b lim: 50 exec/s: 60 rss: 70Mb L: 19/37 MS: 1 ChangeBinInt- 00:08:32.834 [2024-10-01 14:28:15.150245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:720732681083551744 len:36353 00:08:32.834 [2024-10-01 14:28:15.150272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.834 [2024-10-01 14:28:15.150312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10235149276753690624 len:1 00:08:32.834 [2024-10-01 14:28:15.150326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.834 #61 NEW cov: 11833 ft: 14497 corp: 38/727b lim: 50 exec/s: 61 rss: 70Mb L: 25/37 MS: 1 CrossOver- 00:08:32.834 [2024-10-01 14:28:15.190537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:39969446860708608 len:1 00:08:32.834 [2024-10-01 14:28:15.190563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.834 [2024-10-01 14:28:15.190601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:32.834 [2024-10-01 14:28:15.190616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.834 [2024-10-01 14:28:15.190666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:32.834 [2024-10-01 14:28:15.190681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.834 [2024-10-01 14:28:15.190734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10272304540615216782 len:36353 00:08:32.834 [2024-10-01 14:28:15.190750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.834 #62 NEW cov: 11833 ft: 14745 corp: 39/772b lim: 50 exec/s: 62 rss: 70Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:08:32.834 [2024-10-01 14:28:15.230428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:36594 00:08:32.834 [2024-10-01 14:28:15.230454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.835 [2024-10-01 14:28:15.230497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10234993105544546958 len:11 00:08:32.835 [2024-10-01 14:28:15.230512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.835 #63 NEW cov: 11833 ft: 14755 corp: 40/795b lim: 50 exec/s: 63 rss: 70Mb L: 23/45 MS: 1 CrossOver- 
00:08:32.835 [2024-10-01 14:28:15.270648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:32.835 [2024-10-01 14:28:15.270674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.835 [2024-10-01 14:28:15.270723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2814749767106560 len:143 00:08:32.835 [2024-10-01 14:28:15.270738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.835 [2024-10-01 14:28:15.270789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272303933121531534 len:2571 00:08:32.835 [2024-10-01 14:28:15.270803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.835 #64 NEW cov: 11833 ft: 14774 corp: 41/827b lim: 50 exec/s: 64 rss: 70Mb L: 32/45 MS: 1 InsertRepeatedBytes- 00:08:32.835 [2024-10-01 14:28:15.310546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:40126187397208832 len:35983 00:08:32.835 [2024-10-01 14:28:15.310573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.835 #65 NEW cov: 11833 ft: 14778 corp: 42/846b lim: 50 exec/s: 65 rss: 70Mb L: 19/45 MS: 1 ChangeBinInt- 00:08:32.835 [2024-10-01 14:28:15.350789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:156740704308878 len:36495 00:08:32.835 [2024-10-01 14:28:15.350815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.835 [2024-10-01 14:28:15.350850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:741123613846929408 len:1 00:08:32.835 [2024-10-01 14:28:15.350865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.093 #66 NEW cov: 11833 ft: 14795 corp: 43/872b lim: 50 exec/s: 66 rss: 70Mb L: 26/45 MS: 1 PersAutoDict- DE: "I\000\000\000\000\000\000\000"- 00:08:33.093 [2024-10-01 14:28:15.390901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540782952448 len:36495 00:08:33.094 [2024-10-01 14:28:15.390928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.094 [2024-10-01 14:28:15.390962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11607393304832 len:2812 00:08:33.094 [2024-10-01 14:28:15.390978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.094 #67 NEW cov: 11833 ft: 14801 corp: 44/892b lim: 50 exec/s: 67 rss: 70Mb L: 20/45 MS: 1 CMP- DE: "\001\000"- 00:08:33.094 [2024-10-01 14:28:15.430908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:2571 00:08:33.094 [2024-10-01 14:28:15.430934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.094 #69 NEW cov: 11833 ft: 14814 corp: 45/902b lim: 50 exec/s: 34 rss: 70Mb L: 10/45 MS: 2 CopyPart-CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:33.094 #69 DONE cov: 11833 ft: 14814 corp: 45/902b lim: 50 exec/s: 34 rss: 70Mb 00:08:33.094 ###### Recommended dictionary. ###### 00:08:33.094 "\000\000\000\000" # Uses: 2 00:08:33.094 "\012\000" # Uses: 2 00:08:33.094 "I\000\000\000\000\000\000\000" # Uses: 1 00:08:33.094 "\001\000" # Uses: 0 00:08:33.094 "\000\000\000\000\000\000\000\000" # Uses: 0 00:08:33.094 ###### End of recommended dictionary. ###### 00:08:33.094 Done 69 runs in 2 second(s) 00:08:33.094 14:28:15 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_19.conf 00:08:33.094 14:28:15 -- ../common.sh@72 -- # (( i++ )) 00:08:33.094 14:28:15 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.094 14:28:15 -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:33.094 14:28:15 -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:33.094 14:28:15 -- nvmf/run.sh@24 -- # local timen=1 00:08:33.094 14:28:15 -- nvmf/run.sh@25 -- # local core=0x1 00:08:33.094 14:28:15 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:33.094 14:28:15 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:33.094 14:28:15 -- nvmf/run.sh@29 -- # printf %02d 20 00:08:33.094 14:28:15 -- nvmf/run.sh@29 -- # port=4420 00:08:33.094 14:28:15 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:33.094 14:28:15 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:33.094 14:28:15 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:33.094 14:28:15 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 -r /var/tmp/spdk20.sock 00:08:33.352 [2024-10-01 14:28:15.631048] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:33.352 [2024-10-01 14:28:15.631124] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707916 ] 00:08:33.352 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.610 [2024-10-01 14:28:15.942953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.610 [2024-10-01 14:28:16.023738] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.610 [2024-10-01 14:28:16.023874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.610 [2024-10-01 14:28:16.082504] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.610 [2024-10-01 14:28:16.098728] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:33.610 INFO: Running with entropic power schedule (0xFF, 100). 
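[editor's note] Each "#N NEW/DONE cov: ..." status line above is standard libFuzzer telemetry: cumulative edge coverage (cov), feature count (ft), corpus size and total bytes (corp), input length limit (lim), and throughput (exec/s). If the console log is captured to a file, a small awk pass can pull a per-run summary; this is purely an illustrative helper, not part of the test suite, and fuzz.log is an assumed filename.

    # Print each run's final coverage counters from a saved console log.
    awk '/ DONE cov: / {
             for (i = 1; i <= NF; i++) {
                 if ($i == "cov:") cov = $(i + 1)
                 if ($i == "ft:")  ft  = $(i + 1)
             }
         }
         / Done [0-9]+ runs in / {
             printf "cov=%s ft=%s  (%s)\n", cov, ft, $0
         }' fuzz.log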
00:08:33.610 INFO: Seed: 3101333395 00:08:33.610 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:33.610 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:33.610 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:33.610 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.610 #2 INITED exec/s: 0 rss: 61Mb 00:08:33.610 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:33.610 This may also happen if the target rejected all inputs we tried so far 00:08:33.868 [2024-10-01 14:28:16.143951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:33.868 [2024-10-01 14:28:16.143983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.126 NEW_FUNC[1/672]: 0x45c9d8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:34.126 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:34.126 #4 NEW cov: 11664 ft: 11665 corp: 2/29b lim: 90 exec/s: 0 rss: 68Mb L: 28/28 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:34.126 [2024-10-01 14:28:16.464834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.126 [2024-10-01 14:28:16.464895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.126 #15 NEW cov: 11777 ft: 12340 corp: 3/57b lim: 90 exec/s: 0 rss: 68Mb L: 28/28 MS: 1 ChangeBit- 00:08:34.126 [2024-10-01 14:28:16.514729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.126 [2024-10-01 14:28:16.514757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.126 #16 NEW cov: 11783 ft: 12559 corp: 4/85b lim: 90 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 ChangeBit- 00:08:34.126 [2024-10-01 14:28:16.554792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.126 [2024-10-01 14:28:16.554821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.126 #17 NEW cov: 11868 ft: 12918 corp: 5/113b lim: 90 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 CopyPart- 00:08:34.126 [2024-10-01 14:28:16.594908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.126 [2024-10-01 14:28:16.594934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.126 #18 NEW cov: 11868 ft: 12982 corp: 6/141b lim: 90 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 ChangeBit- 00:08:34.126 [2024-10-01 14:28:16.635032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.126 [2024-10-01 14:28:16.635057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.384 #19 NEW cov: 11868 ft: 13086 corp: 7/169b lim: 90 exec/s: 0 rss: 69Mb L: 28/28 MS: 
1 ChangeByte- 00:08:34.384 [2024-10-01 14:28:16.675163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.384 [2024-10-01 14:28:16.675190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.384 #20 NEW cov: 11868 ft: 13151 corp: 8/197b lim: 90 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 ChangeBit- 00:08:34.384 [2024-10-01 14:28:16.715269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.384 [2024-10-01 14:28:16.715297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.384 #26 NEW cov: 11868 ft: 13202 corp: 9/225b lim: 90 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 ShuffleBytes- 00:08:34.384 [2024-10-01 14:28:16.755520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.384 [2024-10-01 14:28:16.755549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.384 [2024-10-01 14:28:16.755601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.384 [2024-10-01 14:28:16.755617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.384 #27 NEW cov: 11868 ft: 14048 corp: 10/261b lim: 90 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:08:34.384 [2024-10-01 14:28:16.795780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.384 [2024-10-01 14:28:16.795806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.384 [2024-10-01 14:28:16.795844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.384 [2024-10-01 14:28:16.795859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.384 [2024-10-01 14:28:16.795911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.384 [2024-10-01 14:28:16.795926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.384 #28 NEW cov: 11868 ft: 14414 corp: 11/316b lim: 90 exec/s: 0 rss: 69Mb L: 55/55 MS: 1 InsertRepeatedBytes- 00:08:34.384 [2024-10-01 14:28:16.835623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.384 [2024-10-01 14:28:16.835650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.384 #29 NEW cov: 11868 ft: 14432 corp: 12/344b lim: 90 exec/s: 0 rss: 69Mb L: 28/55 MS: 1 ChangeBit- 00:08:34.384 [2024-10-01 14:28:16.875868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.384 [2024-10-01 14:28:16.875895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.384 [2024-10-01 14:28:16.875944] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.384 [2024-10-01 14:28:16.875960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.384 #30 NEW cov: 11868 ft: 14529 corp: 13/396b lim: 90 exec/s: 0 rss: 69Mb L: 52/55 MS: 1 CopyPart- 00:08:34.642 [2024-10-01 14:28:16.926158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.642 [2024-10-01 14:28:16.926185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.642 [2024-10-01 14:28:16.926224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.642 [2024-10-01 14:28:16.926239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.642 [2024-10-01 14:28:16.926290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.642 [2024-10-01 14:28:16.926305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.642 #31 NEW cov: 11868 ft: 14602 corp: 14/461b lim: 90 exec/s: 0 rss: 69Mb L: 65/65 MS: 1 InsertRepeatedBytes- 00:08:34.642 [2024-10-01 14:28:16.976025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.642 [2024-10-01 14:28:16.976051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.642 #32 NEW cov: 11868 ft: 14646 corp: 15/490b lim: 90 exec/s: 0 rss: 69Mb L: 29/65 MS: 1 InsertByte- 00:08:34.642 [2024-10-01 14:28:17.016380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.642 [2024-10-01 14:28:17.016408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.642 [2024-10-01 14:28:17.016445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.642 [2024-10-01 14:28:17.016460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.642 [2024-10-01 14:28:17.016512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.642 [2024-10-01 14:28:17.016526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.642 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:34.642 #33 NEW cov: 11891 ft: 14762 corp: 16/555b lim: 90 exec/s: 0 rss: 69Mb L: 65/65 MS: 1 ShuffleBytes- 00:08:34.642 [2024-10-01 14:28:17.066358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.642 [2024-10-01 14:28:17.066384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.642 [2024-10-01 14:28:17.066435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 
nsid:0 00:08:34.642 [2024-10-01 14:28:17.066450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.642 #34 NEW cov: 11891 ft: 14793 corp: 17/595b lim: 90 exec/s: 0 rss: 69Mb L: 40/65 MS: 1 CrossOver- 00:08:34.642 [2024-10-01 14:28:17.106371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.642 [2024-10-01 14:28:17.106398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.642 #36 NEW cov: 11891 ft: 14841 corp: 18/621b lim: 90 exec/s: 0 rss: 69Mb L: 26/65 MS: 2 CrossOver-CrossOver- 00:08:34.643 [2024-10-01 14:28:17.146487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.643 [2024-10-01 14:28:17.146516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.900 #37 NEW cov: 11891 ft: 14852 corp: 19/650b lim: 90 exec/s: 37 rss: 69Mb L: 29/65 MS: 1 CrossOver- 00:08:34.900 [2024-10-01 14:28:17.186596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.900 [2024-10-01 14:28:17.186624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.900 #38 NEW cov: 11891 ft: 14866 corp: 20/678b lim: 90 exec/s: 38 rss: 69Mb L: 28/65 MS: 1 ChangeBinInt- 00:08:34.901 [2024-10-01 14:28:17.226699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.901 [2024-10-01 14:28:17.226730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.901 #44 NEW cov: 11891 ft: 14868 corp: 21/708b lim: 90 exec/s: 44 rss: 69Mb L: 30/65 MS: 1 CopyPart- 00:08:34.901 [2024-10-01 14:28:17.267258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.901 [2024-10-01 14:28:17.267285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.901 [2024-10-01 14:28:17.267339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.901 [2024-10-01 14:28:17.267356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.901 [2024-10-01 14:28:17.267408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.901 [2024-10-01 14:28:17.267426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.901 [2024-10-01 14:28:17.267480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:34.901 [2024-10-01 14:28:17.267495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.901 #45 NEW cov: 11891 ft: 15229 corp: 22/796b lim: 90 exec/s: 45 rss: 69Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:08:34.901 [2024-10-01 14:28:17.306932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.901 [2024-10-01 14:28:17.306959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.901 #46 NEW cov: 11891 ft: 15282 corp: 23/824b lim: 90 exec/s: 46 rss: 69Mb L: 28/88 MS: 1 ChangeBit- 00:08:34.901 [2024-10-01 14:28:17.347171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.901 [2024-10-01 14:28:17.347197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.901 [2024-10-01 14:28:17.347244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.901 [2024-10-01 14:28:17.347259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.901 #47 NEW cov: 11891 ft: 15285 corp: 24/862b lim: 90 exec/s: 47 rss: 69Mb L: 38/88 MS: 1 EraseBytes- 00:08:34.901 [2024-10-01 14:28:17.397221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.901 [2024-10-01 14:28:17.397248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.901 #48 NEW cov: 11891 ft: 15347 corp: 25/880b lim: 90 exec/s: 48 rss: 69Mb L: 18/88 MS: 1 EraseBytes- 00:08:35.159 [2024-10-01 14:28:17.437344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.159 [2024-10-01 14:28:17.437372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.159 #49 NEW cov: 11891 ft: 15355 corp: 26/908b lim: 90 exec/s: 49 rss: 70Mb L: 28/88 MS: 1 CMP- DE: "\001\000\000\000"- 00:08:35.159 [2024-10-01 14:28:17.477854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.159 [2024-10-01 14:28:17.477881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.159 [2024-10-01 14:28:17.477926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.159 [2024-10-01 14:28:17.477942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.159 [2024-10-01 14:28:17.478010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.159 [2024-10-01 14:28:17.478027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.159 [2024-10-01 14:28:17.478081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.159 [2024-10-01 14:28:17.478096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.159 #50 NEW cov: 11891 ft: 15363 corp: 27/997b lim: 90 exec/s: 50 rss: 70Mb L: 89/89 MS: 1 CrossOver- 00:08:35.159 [2024-10-01 14:28:17.527585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.159 [2024-10-01 14:28:17.527611] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.159 #51 NEW cov: 11891 ft: 15366 corp: 28/1017b lim: 90 exec/s: 51 rss: 70Mb L: 20/89 MS: 1 CrossOver- 00:08:35.159 [2024-10-01 14:28:17.557962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.159 [2024-10-01 14:28:17.557988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.159 [2024-10-01 14:28:17.558025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.159 [2024-10-01 14:28:17.558040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.159 [2024-10-01 14:28:17.558091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.159 [2024-10-01 14:28:17.558107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.159 #52 NEW cov: 11891 ft: 15372 corp: 29/1077b lim: 90 exec/s: 52 rss: 70Mb L: 60/89 MS: 1 InsertRepeatedBytes- 00:08:35.159 [2024-10-01 14:28:17.597761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.159 [2024-10-01 14:28:17.597788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.159 #53 NEW cov: 11891 ft: 15385 corp: 30/1107b lim: 90 exec/s: 53 rss: 70Mb L: 30/89 MS: 1 CMP- DE: "\000\000\002\000"- 00:08:35.159 [2024-10-01 14:28:17.637913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.159 [2024-10-01 14:28:17.637939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.159 #54 NEW cov: 11891 ft: 15449 corp: 31/1135b lim: 90 exec/s: 54 rss: 70Mb L: 28/89 MS: 1 CrossOver- 00:08:35.159 [2024-10-01 14:28:17.678314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.159 [2024-10-01 14:28:17.678341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.159 [2024-10-01 14:28:17.678378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.159 [2024-10-01 14:28:17.678393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.159 [2024-10-01 14:28:17.678445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.159 [2024-10-01 14:28:17.678460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.418 #55 NEW cov: 11891 ft: 15457 corp: 32/1198b lim: 90 exec/s: 55 rss: 70Mb L: 63/89 MS: 1 InsertRepeatedBytes- 00:08:35.418 [2024-10-01 14:28:17.718155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.418 [2024-10-01 14:28:17.718182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.418 #56 NEW cov: 11891 ft: 15509 corp: 33/1230b lim: 90 exec/s: 56 rss: 70Mb L: 32/89 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:08:35.418 [2024-10-01 14:28:17.758514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.418 [2024-10-01 14:28:17.758541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.418 [2024-10-01 14:28:17.758596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.418 [2024-10-01 14:28:17.758611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.418 [2024-10-01 14:28:17.758664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.418 [2024-10-01 14:28:17.758679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.418 #57 NEW cov: 11891 ft: 15548 corp: 34/1287b lim: 90 exec/s: 57 rss: 70Mb L: 57/89 MS: 1 CrossOver- 00:08:35.418 [2024-10-01 14:28:17.798357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.418 [2024-10-01 14:28:17.798384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.418 #58 NEW cov: 11891 ft: 15562 corp: 35/1316b lim: 90 exec/s: 58 rss: 70Mb L: 29/89 MS: 1 InsertByte- 00:08:35.418 [2024-10-01 14:28:17.838784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.418 [2024-10-01 14:28:17.838811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.418 [2024-10-01 14:28:17.838847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.418 [2024-10-01 14:28:17.838862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.418 [2024-10-01 14:28:17.838917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.418 [2024-10-01 14:28:17.838932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.418 #59 NEW cov: 11891 ft: 15569 corp: 36/1371b lim: 90 exec/s: 59 rss: 70Mb L: 55/89 MS: 1 PersAutoDict- DE: "\000\000\002\000"- 00:08:35.418 [2024-10-01 14:28:17.888944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.418 [2024-10-01 14:28:17.888970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.418 [2024-10-01 14:28:17.889008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.418 [2024-10-01 14:28:17.889023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.418 [2024-10-01 14:28:17.889077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.418 [2024-10-01 14:28:17.889092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.418 #60 NEW cov: 11891 ft: 15581 corp: 37/1431b lim: 90 exec/s: 60 rss: 70Mb L: 60/89 MS: 1 PersAutoDict- DE: "\000\000\002\000"- 00:08:35.418 [2024-10-01 14:28:17.938841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.418 [2024-10-01 14:28:17.938867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.676 #61 NEW cov: 11891 ft: 15617 corp: 38/1459b lim: 90 exec/s: 61 rss: 70Mb L: 28/89 MS: 1 ChangeByte- 00:08:35.676 [2024-10-01 14:28:17.978924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.676 [2024-10-01 14:28:17.978951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.676 #62 NEW cov: 11891 ft: 15686 corp: 39/1488b lim: 90 exec/s: 62 rss: 70Mb L: 29/89 MS: 1 InsertByte- 00:08:35.676 [2024-10-01 14:28:18.019136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.676 [2024-10-01 14:28:18.019163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.676 [2024-10-01 14:28:18.019213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.676 [2024-10-01 14:28:18.019229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.676 #63 NEW cov: 11891 ft: 15699 corp: 40/1540b lim: 90 exec/s: 63 rss: 70Mb L: 52/89 MS: 1 CopyPart- 00:08:35.676 [2024-10-01 14:28:18.069457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.676 [2024-10-01 14:28:18.069485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.676 [2024-10-01 14:28:18.069522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.676 [2024-10-01 14:28:18.069538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.676 [2024-10-01 14:28:18.069588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.676 [2024-10-01 14:28:18.069604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.676 #64 NEW cov: 11891 ft: 15705 corp: 41/1609b lim: 90 exec/s: 64 rss: 70Mb L: 69/89 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:08:35.676 [2024-10-01 14:28:18.109405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.676 [2024-10-01 14:28:18.109432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.676 [2024-10-01 14:28:18.109500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 
00:08:35.676 [2024-10-01 14:28:18.109516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.676 #65 NEW cov: 11891 ft: 15707 corp: 42/1662b lim: 90 exec/s: 32 rss: 70Mb L: 53/89 MS: 1 InsertByte- 00:08:35.676 #65 DONE cov: 11891 ft: 15707 corp: 42/1662b lim: 90 exec/s: 32 rss: 70Mb 00:08:35.676 ###### Recommended dictionary. ###### 00:08:35.676 "\001\000\000\000" # Uses: 2 00:08:35.676 "\000\000\002\000" # Uses: 2 00:08:35.676 ###### End of recommended dictionary. ###### 00:08:35.676 Done 65 runs in 2 second(s) 00:08:35.935 14:28:18 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_20.conf 00:08:35.935 14:28:18 -- ../common.sh@72 -- # (( i++ )) 00:08:35.935 14:28:18 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.935 14:28:18 -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:35.935 14:28:18 -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:35.935 14:28:18 -- nvmf/run.sh@24 -- # local timen=1 00:08:35.935 14:28:18 -- nvmf/run.sh@25 -- # local core=0x1 00:08:35.935 14:28:18 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:35.935 14:28:18 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:35.935 14:28:18 -- nvmf/run.sh@29 -- # printf %02d 21 00:08:35.935 14:28:18 -- nvmf/run.sh@29 -- # port=4421 00:08:35.935 14:28:18 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:35.935 14:28:18 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:35.935 14:28:18 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:35.935 14:28:18 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 -r /var/tmp/spdk21.sock 00:08:35.935 [2024-10-01 14:28:18.319189] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:35.935 [2024-10-01 14:28:18.319273] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708284 ] 00:08:35.935 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.193 [2024-10-01 14:28:18.622217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.193 [2024-10-01 14:28:18.704681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.193 [2024-10-01 14:28:18.704847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.452 [2024-10-01 14:28:18.763773] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.452 [2024-10-01 14:28:18.779987] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:36.452 INFO: Running with entropic power schedule (0xFF, 100). 
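[editor's note] The "MS:" annotations on the NEW lines above record which libFuzzer mutation chain produced each coverage-adding input (ChangeBinInt, CrossOver, PersAutoDict drawing on the recommended-dictionary entries, and so on). As a rough illustration, again assuming a saved fuzz.log and not part of the harness itself, the operators can be tallied like this:

    # Tally libFuzzer mutation operators that yielded new coverage (sketch).
    grep -o 'MS: [0-9]* [A-Za-z-]*' fuzz.log \
        | awk '{ print $3 }' \
        | tr '-' '\n' \
        | awk 'NF' \
        | sort | uniq -c | sort -rn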
00:08:36.452 INFO: Seed: 1487368101 00:08:36.452 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:36.452 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:36.452 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:36.452 INFO: A corpus is not provided, starting from an empty corpus 00:08:36.452 #2 INITED exec/s: 0 rss: 61Mb 00:08:36.452 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:36.452 This may also happen if the target rejected all inputs we tried so far 00:08:36.452 [2024-10-01 14:28:18.828945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.452 [2024-10-01 14:28:18.828979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.709 NEW_FUNC[1/672]: 0x45fc08 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:36.709 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:36.709 #4 NEW cov: 11634 ft: 11640 corp: 2/11b lim: 50 exec/s: 0 rss: 68Mb L: 10/10 MS: 2 InsertByte-CMP- DE: "\000\000\000\000\000\000\000;"- 00:08:36.709 [2024-10-01 14:28:19.129685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.709 [2024-10-01 14:28:19.129727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.709 #6 NEW cov: 11752 ft: 12107 corp: 3/22b lim: 50 exec/s: 0 rss: 68Mb L: 11/11 MS: 2 ChangeByte-CrossOver- 00:08:36.709 [2024-10-01 14:28:19.169761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.709 [2024-10-01 14:28:19.169790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.709 #7 NEW cov: 11758 ft: 12426 corp: 4/32b lim: 50 exec/s: 0 rss: 68Mb L: 10/11 MS: 1 ShuffleBytes- 00:08:36.709 [2024-10-01 14:28:19.209864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.709 [2024-10-01 14:28:19.209891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.966 #8 NEW cov: 11843 ft: 12792 corp: 5/48b lim: 50 exec/s: 0 rss: 69Mb L: 16/16 MS: 1 CrossOver- 00:08:36.966 [2024-10-01 14:28:19.250138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.966 [2024-10-01 14:28:19.250165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.966 [2024-10-01 14:28:19.250224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:36.967 [2024-10-01 14:28:19.250239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.967 #9 NEW cov: 11843 ft: 13715 corp: 6/71b lim: 50 exec/s: 0 rss: 69Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:08:36.967 [2024-10-01 14:28:19.290061] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.967 [2024-10-01 14:28:19.290088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.967 #10 NEW cov: 11843 ft: 13787 corp: 7/87b lim: 50 exec/s: 0 rss: 69Mb L: 16/23 MS: 1 ShuffleBytes- 00:08:36.967 [2024-10-01 14:28:19.330167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.967 [2024-10-01 14:28:19.330197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.967 #11 NEW cov: 11843 ft: 13881 corp: 8/105b lim: 50 exec/s: 0 rss: 69Mb L: 18/23 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000;"- 00:08:36.967 [2024-10-01 14:28:19.370587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.967 [2024-10-01 14:28:19.370613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.967 [2024-10-01 14:28:19.370650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:36.967 [2024-10-01 14:28:19.370665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.967 [2024-10-01 14:28:19.370722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:36.967 [2024-10-01 14:28:19.370737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.967 #12 NEW cov: 11843 ft: 14252 corp: 9/135b lim: 50 exec/s: 0 rss: 69Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:08:36.967 [2024-10-01 14:28:19.410469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.967 [2024-10-01 14:28:19.410495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.967 #13 NEW cov: 11843 ft: 14322 corp: 10/146b lim: 50 exec/s: 0 rss: 69Mb L: 11/30 MS: 1 InsertByte- 00:08:36.967 [2024-10-01 14:28:19.450678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.967 [2024-10-01 14:28:19.450705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.967 [2024-10-01 14:28:19.450762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:36.967 [2024-10-01 14:28:19.450777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.967 #14 NEW cov: 11843 ft: 14350 corp: 11/167b lim: 50 exec/s: 0 rss: 69Mb L: 21/30 MS: 1 CopyPart- 00:08:37.225 [2024-10-01 14:28:19.500953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.225 [2024-10-01 14:28:19.500981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.225 [2024-10-01 14:28:19.501019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE 
(15) sqid:1 cid:1 nsid:0 00:08:37.225 [2024-10-01 14:28:19.501035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.225 [2024-10-01 14:28:19.501089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:37.225 [2024-10-01 14:28:19.501105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.225 #15 NEW cov: 11843 ft: 14376 corp: 12/198b lim: 50 exec/s: 0 rss: 69Mb L: 31/31 MS: 1 InsertByte- 00:08:37.225 [2024-10-01 14:28:19.550833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.225 [2024-10-01 14:28:19.550860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.225 #16 NEW cov: 11843 ft: 14451 corp: 13/214b lim: 50 exec/s: 0 rss: 69Mb L: 16/31 MS: 1 ShuffleBytes- 00:08:37.225 [2024-10-01 14:28:19.590958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.225 [2024-10-01 14:28:19.590985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.225 #17 NEW cov: 11843 ft: 14471 corp: 14/233b lim: 50 exec/s: 0 rss: 69Mb L: 19/31 MS: 1 InsertByte- 00:08:37.225 [2024-10-01 14:28:19.631082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.225 [2024-10-01 14:28:19.631108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.225 #18 NEW cov: 11843 ft: 14486 corp: 15/244b lim: 50 exec/s: 0 rss: 69Mb L: 11/31 MS: 1 ChangeBit- 00:08:37.225 [2024-10-01 14:28:19.671169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.225 [2024-10-01 14:28:19.671195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.225 #19 NEW cov: 11843 ft: 14504 corp: 16/261b lim: 50 exec/s: 0 rss: 69Mb L: 17/31 MS: 1 InsertByte- 00:08:37.225 [2024-10-01 14:28:19.711309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.225 [2024-10-01 14:28:19.711336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.225 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:37.225 #20 NEW cov: 11866 ft: 14537 corp: 17/272b lim: 50 exec/s: 0 rss: 69Mb L: 11/31 MS: 1 InsertByte- 00:08:37.483 [2024-10-01 14:28:19.751452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.483 [2024-10-01 14:28:19.751479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.483 #21 NEW cov: 11866 ft: 14624 corp: 18/290b lim: 50 exec/s: 0 rss: 69Mb L: 18/31 MS: 1 ChangeByte- 00:08:37.483 [2024-10-01 14:28:19.791843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.483 [2024-10-01 14:28:19.791872] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.483 [2024-10-01 14:28:19.791913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.483 [2024-10-01 14:28:19.791929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.483 [2024-10-01 14:28:19.791986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:37.483 [2024-10-01 14:28:19.792002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.483 #22 NEW cov: 11866 ft: 14690 corp: 19/325b lim: 50 exec/s: 22 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:08:37.483 [2024-10-01 14:28:19.841855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.483 [2024-10-01 14:28:19.841883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.483 [2024-10-01 14:28:19.841931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.483 [2024-10-01 14:28:19.841947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.483 #23 NEW cov: 11866 ft: 14721 corp: 20/351b lim: 50 exec/s: 23 rss: 69Mb L: 26/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000;"- 00:08:37.483 [2024-10-01 14:28:19.881794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.483 [2024-10-01 14:28:19.881827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.483 #24 NEW cov: 11866 ft: 14768 corp: 21/369b lim: 50 exec/s: 24 rss: 69Mb L: 18/35 MS: 1 InsertByte- 00:08:37.483 [2024-10-01 14:28:19.921958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.483 [2024-10-01 14:28:19.921987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.483 #25 NEW cov: 11866 ft: 14843 corp: 22/379b lim: 50 exec/s: 25 rss: 69Mb L: 10/35 MS: 1 ChangeBinInt- 00:08:37.483 [2024-10-01 14:28:19.962161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.483 [2024-10-01 14:28:19.962188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.483 [2024-10-01 14:28:19.962251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.483 [2024-10-01 14:28:19.962267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.483 #26 NEW cov: 11866 ft: 14859 corp: 23/401b lim: 50 exec/s: 26 rss: 69Mb L: 22/35 MS: 1 CopyPart- 00:08:37.483 [2024-10-01 14:28:20.002360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.483 [2024-10-01 14:28:20.002389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.483 [2024-10-01 14:28:20.002444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.483 [2024-10-01 14:28:20.002462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.741 #27 NEW cov: 11866 ft: 14892 corp: 24/427b lim: 50 exec/s: 27 rss: 69Mb L: 26/35 MS: 1 CrossOver- 00:08:37.741 [2024-10-01 14:28:20.042432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.741 [2024-10-01 14:28:20.042469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.741 [2024-10-01 14:28:20.042526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.741 [2024-10-01 14:28:20.042543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.741 #28 NEW cov: 11866 ft: 14914 corp: 25/453b lim: 50 exec/s: 28 rss: 69Mb L: 26/35 MS: 1 ChangeByte- 00:08:37.741 [2024-10-01 14:28:20.092458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.741 [2024-10-01 14:28:20.092491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.741 #29 NEW cov: 11866 ft: 14937 corp: 26/464b lim: 50 exec/s: 29 rss: 70Mb L: 11/35 MS: 1 CopyPart- 00:08:37.741 [2024-10-01 14:28:20.143002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.741 [2024-10-01 14:28:20.143029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.741 [2024-10-01 14:28:20.143076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.741 [2024-10-01 14:28:20.143092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.741 [2024-10-01 14:28:20.143146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:37.741 [2024-10-01 14:28:20.143161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.741 [2024-10-01 14:28:20.143215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:37.741 [2024-10-01 14:28:20.143230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.741 #30 NEW cov: 11866 ft: 15275 corp: 27/505b lim: 50 exec/s: 30 rss: 70Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:08:37.741 [2024-10-01 14:28:20.192683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.741 [2024-10-01 14:28:20.192710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.741 #31 NEW cov: 11866 ft: 15288 corp: 28/517b lim: 50 exec/s: 31 rss: 70Mb L: 12/41 MS: 1 InsertByte- 00:08:37.741 [2024-10-01 14:28:20.232965] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.741 [2024-10-01 14:28:20.232992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.741 [2024-10-01 14:28:20.233034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.741 [2024-10-01 14:28:20.233050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.741 #32 NEW cov: 11866 ft: 15324 corp: 29/539b lim: 50 exec/s: 32 rss: 70Mb L: 22/41 MS: 1 InsertByte- 00:08:37.999 [2024-10-01 14:28:20.272912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.999 [2024-10-01 14:28:20.272940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.999 #33 NEW cov: 11866 ft: 15327 corp: 30/549b lim: 50 exec/s: 33 rss: 70Mb L: 10/41 MS: 1 ChangeBinInt- 00:08:37.999 [2024-10-01 14:28:20.313331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.999 [2024-10-01 14:28:20.313358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.999 [2024-10-01 14:28:20.313396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.999 [2024-10-01 14:28:20.313411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.999 [2024-10-01 14:28:20.313465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:37.999 [2024-10-01 14:28:20.313480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.999 #34 NEW cov: 11866 ft: 15405 corp: 31/580b lim: 50 exec/s: 34 rss: 70Mb L: 31/41 MS: 1 ChangeBit- 00:08:37.999 [2024-10-01 14:28:20.363322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.999 [2024-10-01 14:28:20.363350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.999 [2024-10-01 14:28:20.363394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.999 [2024-10-01 14:28:20.363409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.999 #35 NEW cov: 11866 ft: 15407 corp: 32/604b lim: 50 exec/s: 35 rss: 70Mb L: 24/41 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000;"- 00:08:37.999 [2024-10-01 14:28:20.403411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.999 [2024-10-01 14:28:20.403437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.999 [2024-10-01 14:28:20.403489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.999 [2024-10-01 14:28:20.403505] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.999 #36 NEW cov: 11866 ft: 15412 corp: 33/633b lim: 50 exec/s: 36 rss: 70Mb L: 29/41 MS: 1 EraseBytes- 00:08:37.999 [2024-10-01 14:28:20.443690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.999 [2024-10-01 14:28:20.443717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.999 [2024-10-01 14:28:20.443760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.999 [2024-10-01 14:28:20.443779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.999 [2024-10-01 14:28:20.443835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:37.999 [2024-10-01 14:28:20.443849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.999 #37 NEW cov: 11866 ft: 15467 corp: 34/670b lim: 50 exec/s: 37 rss: 70Mb L: 37/41 MS: 1 CrossOver- 00:08:37.999 [2024-10-01 14:28:20.493996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.999 [2024-10-01 14:28:20.494023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.999 [2024-10-01 14:28:20.494088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.999 [2024-10-01 14:28:20.494104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.999 [2024-10-01 14:28:20.494158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:37.999 [2024-10-01 14:28:20.494174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.000 [2024-10-01 14:28:20.494227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:38.000 [2024-10-01 14:28:20.494242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.257 #38 NEW cov: 11866 ft: 15483 corp: 35/714b lim: 50 exec/s: 38 rss: 70Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:08:38.257 [2024-10-01 14:28:20.543842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.257 [2024-10-01 14:28:20.543870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.257 [2024-10-01 14:28:20.543912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:38.257 [2024-10-01 14:28:20.543927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.257 #39 NEW cov: 11866 ft: 15492 corp: 36/740b lim: 50 exec/s: 39 rss: 70Mb L: 26/44 MS: 1 ShuffleBytes- 00:08:38.257 [2024-10-01 14:28:20.583785] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.257 [2024-10-01 14:28:20.583811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.257 #40 NEW cov: 11866 ft: 15509 corp: 37/751b lim: 50 exec/s: 40 rss: 70Mb L: 11/44 MS: 1 ShuffleBytes- 00:08:38.257 [2024-10-01 14:28:20.623916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.257 [2024-10-01 14:28:20.623943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.257 #41 NEW cov: 11866 ft: 15517 corp: 38/768b lim: 50 exec/s: 41 rss: 70Mb L: 17/44 MS: 1 EraseBytes- 00:08:38.257 [2024-10-01 14:28:20.664330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.257 [2024-10-01 14:28:20.664357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.257 [2024-10-01 14:28:20.664395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:38.257 [2024-10-01 14:28:20.664410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.257 [2024-10-01 14:28:20.664464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:38.257 [2024-10-01 14:28:20.664481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.257 #42 NEW cov: 11866 ft: 15527 corp: 39/802b lim: 50 exec/s: 42 rss: 70Mb L: 34/44 MS: 1 CMP- DE: "\305\340\232\374\004\341!\000"- 00:08:38.257 [2024-10-01 14:28:20.704171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.257 [2024-10-01 14:28:20.704198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.257 #43 NEW cov: 11866 ft: 15536 corp: 40/816b lim: 50 exec/s: 43 rss: 70Mb L: 14/44 MS: 1 InsertRepeatedBytes- 00:08:38.257 [2024-10-01 14:28:20.744723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.257 [2024-10-01 14:28:20.744750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.257 [2024-10-01 14:28:20.744813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:38.257 [2024-10-01 14:28:20.744830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.257 [2024-10-01 14:28:20.744884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:38.257 [2024-10-01 14:28:20.744900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.257 [2024-10-01 14:28:20.744955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:38.257 [2024-10-01 14:28:20.744970] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.257 #44 NEW cov: 11866 ft: 15540 corp: 41/861b lim: 50 exec/s: 44 rss: 70Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:08:38.516 [2024-10-01 14:28:20.784402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.516 [2024-10-01 14:28:20.784429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.516 #45 NEW cov: 11866 ft: 15577 corp: 42/871b lim: 50 exec/s: 45 rss: 70Mb L: 10/45 MS: 1 ChangeBinInt- 00:08:38.516 [2024-10-01 14:28:20.814451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.516 [2024-10-01 14:28:20.814478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.516 #46 NEW cov: 11866 ft: 15605 corp: 43/881b lim: 50 exec/s: 23 rss: 70Mb L: 10/45 MS: 1 ShuffleBytes- 00:08:38.516 #46 DONE cov: 11866 ft: 15605 corp: 43/881b lim: 50 exec/s: 23 rss: 70Mb 00:08:38.516 ###### Recommended dictionary. ###### 00:08:38.516 "\000\000\000\000\000\000\000;" # Uses: 3 00:08:38.516 "\305\340\232\374\004\341!\000" # Uses: 0 00:08:38.516 ###### End of recommended dictionary. ###### 00:08:38.516 Done 46 runs in 2 second(s) 00:08:38.516 14:28:20 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_21.conf 00:08:38.516 14:28:20 -- ../common.sh@72 -- # (( i++ )) 00:08:38.516 14:28:20 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:38.516 14:28:20 -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:38.516 14:28:20 -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:38.516 14:28:20 -- nvmf/run.sh@24 -- # local timen=1 00:08:38.516 14:28:20 -- nvmf/run.sh@25 -- # local core=0x1 00:08:38.516 14:28:20 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:38.516 14:28:20 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:38.516 14:28:20 -- nvmf/run.sh@29 -- # printf %02d 22 00:08:38.516 14:28:20 -- nvmf/run.sh@29 -- # port=4422 00:08:38.516 14:28:20 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:38.516 14:28:20 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:38.516 14:28:20 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:38.516 14:28:20 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 -r /var/tmp/spdk22.sock 00:08:38.516 [2024-10-01 14:28:21.018051] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:38.516 [2024-10-01 14:28:21.018152] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708647 ] 00:08:38.774 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.031 [2024-10-01 14:28:21.325183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.031 [2024-10-01 14:28:21.407583] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:39.031 [2024-10-01 14:28:21.407748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.031 [2024-10-01 14:28:21.466428] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.031 [2024-10-01 14:28:21.482624] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:39.031 INFO: Running with entropic power schedule (0xFF, 100). 00:08:39.031 INFO: Seed: 4192359624 00:08:39.031 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:39.031 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:39.031 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:39.031 INFO: A corpus is not provided, starting from an empty corpus 00:08:39.031 #2 INITED exec/s: 0 rss: 61Mb 00:08:39.031 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:39.031 This may also happen if the target rejected all inputs we tried so far 00:08:39.288 [2024-10-01 14:28:21.559360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.288 [2024-10-01 14:28:21.559410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.546 NEW_FUNC[1/672]: 0x461ed8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:39.546 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:39.546 #3 NEW cov: 11665 ft: 11658 corp: 2/21b lim: 85 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:39.546 [2024-10-01 14:28:21.881003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.546 [2024-10-01 14:28:21.881040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.546 [2024-10-01 14:28:21.881148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.546 [2024-10-01 14:28:21.881169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.546 [2024-10-01 14:28:21.881262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:39.546 [2024-10-01 14:28:21.881281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.546 #5 NEW cov: 11778 ft: 13008 corp: 3/78b lim: 85 exec/s: 0 rss: 68Mb L: 57/57 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:39.546 [2024-10-01 14:28:21.941307] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.546 [2024-10-01 14:28:21.941337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.546 [2024-10-01 14:28:21.941408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.546 [2024-10-01 14:28:21.941432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.546 [2024-10-01 14:28:21.941491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:39.546 [2024-10-01 14:28:21.941507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.546 #6 NEW cov: 11784 ft: 13219 corp: 4/138b lim: 85 exec/s: 0 rss: 68Mb L: 60/60 MS: 1 CopyPart- 00:08:39.546 [2024-10-01 14:28:22.000803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.546 [2024-10-01 14:28:22.000829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.546 #7 NEW cov: 11869 ft: 13595 corp: 5/158b lim: 85 exec/s: 0 rss: 69Mb L: 20/60 MS: 1 ChangeBit- 00:08:39.546 [2024-10-01 14:28:22.061730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.546 [2024-10-01 14:28:22.061761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.546 [2024-10-01 14:28:22.061839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.546 [2024-10-01 14:28:22.061859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.546 [2024-10-01 14:28:22.061927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:39.546 [2024-10-01 14:28:22.061948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.804 #8 NEW cov: 11869 ft: 13759 corp: 6/215b lim: 85 exec/s: 0 rss: 69Mb L: 57/60 MS: 1 CrossOver- 00:08:39.804 [2024-10-01 14:28:22.111953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.804 [2024-10-01 14:28:22.111982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.804 [2024-10-01 14:28:22.112054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.804 [2024-10-01 14:28:22.112074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.804 [2024-10-01 14:28:22.112150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:39.804 [2024-10-01 14:28:22.112167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.804 #9 NEW cov: 11869 ft: 13854 corp: 7/272b 
lim: 85 exec/s: 0 rss: 69Mb L: 57/60 MS: 1 CopyPart- 00:08:39.804 [2024-10-01 14:28:22.171779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.804 [2024-10-01 14:28:22.171808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.804 [2024-10-01 14:28:22.171875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.804 [2024-10-01 14:28:22.171893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.804 #13 NEW cov: 11869 ft: 14209 corp: 8/317b lim: 85 exec/s: 0 rss: 69Mb L: 45/60 MS: 4 CopyPart-ChangeByte-ChangeBit-CrossOver- 00:08:39.804 [2024-10-01 14:28:22.222355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.804 [2024-10-01 14:28:22.222383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.804 [2024-10-01 14:28:22.222450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.804 [2024-10-01 14:28:22.222471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.804 [2024-10-01 14:28:22.222546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:39.804 [2024-10-01 14:28:22.222563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.804 #14 NEW cov: 11869 ft: 14256 corp: 9/374b lim: 85 exec/s: 0 rss: 69Mb L: 57/60 MS: 1 ChangeBinInt- 00:08:39.804 [2024-10-01 14:28:22.282236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.804 [2024-10-01 14:28:22.282263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.804 [2024-10-01 14:28:22.282345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.804 [2024-10-01 14:28:22.282360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.804 #15 NEW cov: 11869 ft: 14308 corp: 10/418b lim: 85 exec/s: 0 rss: 69Mb L: 44/60 MS: 1 EraseBytes- 00:08:40.062 [2024-10-01 14:28:22.332181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.062 [2024-10-01 14:28:22.332209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.062 #16 NEW cov: 11869 ft: 14380 corp: 11/439b lim: 85 exec/s: 0 rss: 69Mb L: 21/60 MS: 1 InsertByte- 00:08:40.062 [2024-10-01 14:28:22.383029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.062 [2024-10-01 14:28:22.383056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.062 [2024-10-01 14:28:22.383120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:1 nsid:0 00:08:40.062 [2024-10-01 14:28:22.383138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.062 [2024-10-01 14:28:22.383219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.062 [2024-10-01 14:28:22.383233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.062 #17 NEW cov: 11869 ft: 14402 corp: 12/496b lim: 85 exec/s: 0 rss: 69Mb L: 57/60 MS: 1 ChangeBit- 00:08:40.062 [2024-10-01 14:28:22.433315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.062 [2024-10-01 14:28:22.433342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.062 [2024-10-01 14:28:22.433426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.062 [2024-10-01 14:28:22.433445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.062 [2024-10-01 14:28:22.433529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.062 [2024-10-01 14:28:22.433546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.062 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:40.062 #18 NEW cov: 11892 ft: 14483 corp: 13/556b lim: 85 exec/s: 0 rss: 69Mb L: 60/60 MS: 1 CMP- DE: "|\253'\345\005\341!\000"- 00:08:40.062 [2024-10-01 14:28:22.493677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.062 [2024-10-01 14:28:22.493706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.062 [2024-10-01 14:28:22.493779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.062 [2024-10-01 14:28:22.493800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.062 [2024-10-01 14:28:22.493861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.062 [2024-10-01 14:28:22.493879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.062 #19 NEW cov: 11892 ft: 14506 corp: 14/617b lim: 85 exec/s: 0 rss: 69Mb L: 61/61 MS: 1 InsertByte- 00:08:40.062 [2024-10-01 14:28:22.543388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.062 [2024-10-01 14:28:22.543418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.062 [2024-10-01 14:28:22.543518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.062 [2024-10-01 14:28:22.543538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:40.062 #20 NEW cov: 11892 ft: 14528 corp: 15/662b lim: 85 exec/s: 20 rss: 69Mb L: 45/61 MS: 1 ChangeBinInt- 00:08:40.321 [2024-10-01 14:28:22.603626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.321 [2024-10-01 14:28:22.603661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.321 [2024-10-01 14:28:22.603781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.321 [2024-10-01 14:28:22.603815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.321 #21 NEW cov: 11892 ft: 14572 corp: 16/707b lim: 85 exec/s: 21 rss: 69Mb L: 45/61 MS: 1 ShuffleBytes- 00:08:40.321 [2024-10-01 14:28:22.654199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.321 [2024-10-01 14:28:22.654228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.321 [2024-10-01 14:28:22.654290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.321 [2024-10-01 14:28:22.654310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.321 [2024-10-01 14:28:22.654374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.321 [2024-10-01 14:28:22.654394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.321 #22 NEW cov: 11892 ft: 14579 corp: 17/768b lim: 85 exec/s: 22 rss: 69Mb L: 61/61 MS: 1 PersAutoDict- DE: "|\253'\345\005\341!\000"- 00:08:40.321 [2024-10-01 14:28:22.714851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.321 [2024-10-01 14:28:22.714880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.321 [2024-10-01 14:28:22.714960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.321 [2024-10-01 14:28:22.714978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.321 [2024-10-01 14:28:22.715063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.321 [2024-10-01 14:28:22.715083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.321 [2024-10-01 14:28:22.715173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:40.321 [2024-10-01 14:28:22.715193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.321 #23 NEW cov: 11892 ft: 14907 corp: 18/843b lim: 85 exec/s: 23 rss: 69Mb L: 75/75 MS: 1 InsertRepeatedBytes- 00:08:40.321 [2024-10-01 14:28:22.774739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 
nsid:0 00:08:40.321 [2024-10-01 14:28:22.774769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.321 [2024-10-01 14:28:22.774875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.321 [2024-10-01 14:28:22.774894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.321 [2024-10-01 14:28:22.774987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.321 [2024-10-01 14:28:22.775005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.321 #24 NEW cov: 11892 ft: 14923 corp: 19/904b lim: 85 exec/s: 24 rss: 69Mb L: 61/75 MS: 1 ChangeByte- 00:08:40.321 [2024-10-01 14:28:22.824168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.321 [2024-10-01 14:28:22.824197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.578 #25 NEW cov: 11892 ft: 14995 corp: 20/925b lim: 85 exec/s: 25 rss: 69Mb L: 21/75 MS: 1 ChangeByte- 00:08:40.578 [2024-10-01 14:28:22.875422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.578 [2024-10-01 14:28:22.875451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.578 [2024-10-01 14:28:22.875523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.578 [2024-10-01 14:28:22.875541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.578 [2024-10-01 14:28:22.875619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.578 [2024-10-01 14:28:22.875635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.578 [2024-10-01 14:28:22.875728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:40.578 [2024-10-01 14:28:22.875746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.578 #26 NEW cov: 11892 ft: 15003 corp: 21/997b lim: 85 exec/s: 26 rss: 69Mb L: 72/75 MS: 1 InsertRepeatedBytes- 00:08:40.578 [2024-10-01 14:28:22.925333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.578 [2024-10-01 14:28:22.925359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.578 [2024-10-01 14:28:22.925429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.578 [2024-10-01 14:28:22.925450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.578 [2024-10-01 14:28:22.925530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:2 nsid:0 00:08:40.578 [2024-10-01 14:28:22.925545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.578 #27 NEW cov: 11892 ft: 15035 corp: 22/1054b lim: 85 exec/s: 27 rss: 69Mb L: 57/75 MS: 1 ShuffleBytes- 00:08:40.578 [2024-10-01 14:28:22.974705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.578 [2024-10-01 14:28:22.974738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.578 #29 NEW cov: 11892 ft: 15121 corp: 23/1078b lim: 85 exec/s: 29 rss: 69Mb L: 24/75 MS: 2 EraseBytes-CopyPart- 00:08:40.578 [2024-10-01 14:28:23.035681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.578 [2024-10-01 14:28:23.035707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.578 [2024-10-01 14:28:23.035782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.578 [2024-10-01 14:28:23.035799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.578 [2024-10-01 14:28:23.035871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.578 [2024-10-01 14:28:23.035886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.578 #30 NEW cov: 11892 ft: 15128 corp: 24/1135b lim: 85 exec/s: 30 rss: 69Mb L: 57/75 MS: 1 ChangeBinInt- 00:08:40.578 [2024-10-01 14:28:23.085808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.578 [2024-10-01 14:28:23.085840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.578 [2024-10-01 14:28:23.085935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.578 [2024-10-01 14:28:23.085953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.578 [2024-10-01 14:28:23.086027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.578 [2024-10-01 14:28:23.086048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.836 #31 NEW cov: 11892 ft: 15139 corp: 25/1200b lim: 85 exec/s: 31 rss: 69Mb L: 65/75 MS: 1 CMP- DE: "\317l]=\006\341!\000"- 00:08:40.836 [2024-10-01 14:28:23.146140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.836 [2024-10-01 14:28:23.146170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.836 [2024-10-01 14:28:23.146258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.836 [2024-10-01 14:28:23.146276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.836 [2024-10-01 14:28:23.146341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.836 [2024-10-01 14:28:23.146361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.836 #32 NEW cov: 11892 ft: 15147 corp: 26/1251b lim: 85 exec/s: 32 rss: 69Mb L: 51/75 MS: 1 EraseBytes- 00:08:40.836 [2024-10-01 14:28:23.206371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.836 [2024-10-01 14:28:23.206401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.836 [2024-10-01 14:28:23.206482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.836 [2024-10-01 14:28:23.206501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.836 [2024-10-01 14:28:23.206562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.836 [2024-10-01 14:28:23.206578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.836 #33 NEW cov: 11892 ft: 15154 corp: 27/1313b lim: 85 exec/s: 33 rss: 70Mb L: 62/75 MS: 1 InsertByte- 00:08:40.836 [2024-10-01 14:28:23.265858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.836 [2024-10-01 14:28:23.265886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.836 #43 NEW cov: 11892 ft: 15170 corp: 28/1336b lim: 85 exec/s: 43 rss: 70Mb L: 23/75 MS: 5 ChangeByte-PersAutoDict-ChangeByte-ChangeByte-InsertRepeatedBytes- DE: "\317l]=\006\341!\000"- 00:08:40.836 [2024-10-01 14:28:23.317112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.836 [2024-10-01 14:28:23.317140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.836 [2024-10-01 14:28:23.317212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.836 [2024-10-01 14:28:23.317230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.836 [2024-10-01 14:28:23.317317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.836 [2024-10-01 14:28:23.317336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.836 [2024-10-01 14:28:23.317430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:40.836 [2024-10-01 14:28:23.317450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.836 #44 NEW cov: 11892 ft: 15172 corp: 29/1411b lim: 85 exec/s: 44 rss: 70Mb L: 75/75 MS: 1 PersAutoDict- DE: "\317l]=\006\341!\000"- 00:08:41.093 [2024-10-01 14:28:23.377337] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:41.094 [2024-10-01 14:28:23.377365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.094 [2024-10-01 14:28:23.377457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:41.094 [2024-10-01 14:28:23.377475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.094 [2024-10-01 14:28:23.377532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:41.094 [2024-10-01 14:28:23.377546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.094 [2024-10-01 14:28:23.377641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:41.094 [2024-10-01 14:28:23.377659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:41.094 [2024-10-01 14:28:23.427554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:41.094 [2024-10-01 14:28:23.427584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.094 [2024-10-01 14:28:23.427675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:41.094 [2024-10-01 14:28:23.427693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.094 [2024-10-01 14:28:23.427784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:41.094 [2024-10-01 14:28:23.427799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.094 [2024-10-01 14:28:23.427898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:41.094 [2024-10-01 14:28:23.427915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:41.094 #46 NEW cov: 11892 ft: 15175 corp: 30/1490b lim: 85 exec/s: 46 rss: 70Mb L: 79/79 MS: 2 InsertRepeatedBytes-PersAutoDict- DE: "|\253'\345\005\341!\000"- 00:08:41.094 [2024-10-01 14:28:23.477425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:41.094 [2024-10-01 14:28:23.477454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.094 [2024-10-01 14:28:23.477524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:41.094 [2024-10-01 14:28:23.477545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.094 [2024-10-01 14:28:23.477613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:41.094 [2024-10-01 14:28:23.477630] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.094 #47 NEW cov: 11892 ft: 15252 corp: 31/1550b lim: 85 exec/s: 47 rss: 70Mb L: 60/79 MS: 1 ChangeByte- 00:08:41.094 [2024-10-01 14:28:23.536997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:41.094 [2024-10-01 14:28:23.537022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.094 #48 NEW cov: 11892 ft: 15275 corp: 32/1574b lim: 85 exec/s: 24 rss: 70Mb L: 24/79 MS: 1 PersAutoDict- DE: "\317l]=\006\341!\000"- 00:08:41.094 #48 DONE cov: 11892 ft: 15275 corp: 32/1574b lim: 85 exec/s: 24 rss: 70Mb 00:08:41.094 ###### Recommended dictionary. ###### 00:08:41.094 "|\253'\345\005\341!\000" # Uses: 2 00:08:41.094 "\317l]=\006\341!\000" # Uses: 5 00:08:41.094 ###### End of recommended dictionary. ###### 00:08:41.094 Done 48 runs in 2 second(s) 00:08:41.351 14:28:23 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_22.conf 00:08:41.351 14:28:23 -- ../common.sh@72 -- # (( i++ )) 00:08:41.351 14:28:23 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:41.351 14:28:23 -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:41.351 14:28:23 -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:41.351 14:28:23 -- nvmf/run.sh@24 -- # local timen=1 00:08:41.351 14:28:23 -- nvmf/run.sh@25 -- # local core=0x1 00:08:41.351 14:28:23 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:41.351 14:28:23 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:41.351 14:28:23 -- nvmf/run.sh@29 -- # printf %02d 23 00:08:41.351 14:28:23 -- nvmf/run.sh@29 -- # port=4423 00:08:41.351 14:28:23 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:41.351 14:28:23 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:41.351 14:28:23 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:41.351 14:28:23 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 -r /var/tmp/spdk23.sock 00:08:41.351 [2024-10-01 14:28:23.732435] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:41.351 [2024-10-01 14:28:23.732503] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709015 ] 00:08:41.351 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.609 [2024-10-01 14:28:24.026592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.609 [2024-10-01 14:28:24.109991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:41.609 [2024-10-01 14:28:24.110130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.865 [2024-10-01 14:28:24.168836] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.865 [2024-10-01 14:28:24.185060] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:41.865 INFO: Running with entropic power schedule (0xFF, 100). 00:08:41.865 INFO: Seed: 2598398680 00:08:41.865 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:41.865 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:41.865 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:41.865 INFO: A corpus is not provided, starting from an empty corpus 00:08:41.866 #2 INITED exec/s: 0 rss: 61Mb 00:08:41.866 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:41.866 This may also happen if the target rejected all inputs we tried so far 00:08:41.866 [2024-10-01 14:28:24.240325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:41.866 [2024-10-01 14:28:24.240360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.866 [2024-10-01 14:28:24.240424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:41.866 [2024-10-01 14:28:24.240441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.123 NEW_FUNC[1/671]: 0x465118 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:42.123 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:42.123 #4 NEW cov: 11598 ft: 11573 corp: 2/13b lim: 25 exec/s: 0 rss: 68Mb L: 12/12 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:42.123 [2024-10-01 14:28:24.561180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.123 [2024-10-01 14:28:24.561243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.123 #5 NEW cov: 11711 ft: 12743 corp: 3/19b lim: 25 exec/s: 0 rss: 68Mb L: 6/12 MS: 1 CrossOver- 00:08:42.123 [2024-10-01 14:28:24.611200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.123 [2024-10-01 14:28:24.611230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.123 [2024-10-01 14:28:24.611306] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.123 [2024-10-01 14:28:24.611321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.123 #6 NEW cov: 11717 ft: 12943 corp: 4/31b lim: 25 exec/s: 0 rss: 68Mb L: 12/12 MS: 1 ChangeASCIIInt- 00:08:42.381 [2024-10-01 14:28:24.651331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.381 [2024-10-01 14:28:24.651360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.381 [2024-10-01 14:28:24.651418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.381 [2024-10-01 14:28:24.651433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.381 #7 NEW cov: 11802 ft: 13138 corp: 5/43b lim: 25 exec/s: 0 rss: 68Mb L: 12/12 MS: 1 ShuffleBytes- 00:08:42.381 [2024-10-01 14:28:24.691427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.381 [2024-10-01 14:28:24.691454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.381 [2024-10-01 14:28:24.691512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.381 [2024-10-01 14:28:24.691528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.381 #8 NEW cov: 11802 ft: 13207 corp: 6/55b lim: 25 exec/s: 0 rss: 68Mb L: 12/12 MS: 1 ShuffleBytes- 00:08:42.381 [2024-10-01 14:28:24.731625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.381 [2024-10-01 14:28:24.731655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.381 [2024-10-01 14:28:24.731692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.381 [2024-10-01 14:28:24.731708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.381 [2024-10-01 14:28:24.731768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.381 [2024-10-01 14:28:24.731782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.381 #9 NEW cov: 11802 ft: 13535 corp: 7/74b lim: 25 exec/s: 0 rss: 68Mb L: 19/19 MS: 1 CopyPart- 00:08:42.381 [2024-10-01 14:28:24.771632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.381 [2024-10-01 14:28:24.771659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.381 [2024-10-01 14:28:24.771704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.381 [2024-10-01 14:28:24.771724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.381 #10 NEW cov: 11802 ft: 13656 corp: 8/87b lim: 25 exec/s: 0 rss: 68Mb L: 13/19 MS: 1 InsertByte- 00:08:42.381 [2024-10-01 14:28:24.811759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.381 [2024-10-01 14:28:24.811785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.381 [2024-10-01 14:28:24.811827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.381 [2024-10-01 14:28:24.811842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.381 #11 NEW cov: 11802 ft: 13702 corp: 9/99b lim: 25 exec/s: 0 rss: 68Mb L: 12/19 MS: 1 ChangeByte- 00:08:42.381 [2024-10-01 14:28:24.851855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.381 [2024-10-01 14:28:24.851882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.381 [2024-10-01 14:28:24.851925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.381 [2024-10-01 14:28:24.851940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.381 #12 NEW cov: 11802 ft: 13740 corp: 10/111b lim: 25 exec/s: 0 rss: 68Mb L: 12/19 MS: 1 ChangeASCIIInt- 00:08:42.381 [2024-10-01 14:28:24.892004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.381 [2024-10-01 14:28:24.892030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.381 [2024-10-01 14:28:24.892071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.381 [2024-10-01 14:28:24.892087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.638 #13 NEW cov: 11802 ft: 13813 corp: 11/125b lim: 25 exec/s: 0 rss: 69Mb L: 14/19 MS: 1 InsertRepeatedBytes- 00:08:42.638 [2024-10-01 14:28:24.932142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.638 [2024-10-01 14:28:24.932170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.638 [2024-10-01 14:28:24.932224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.638 [2024-10-01 14:28:24.932239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.638 #19 NEW cov: 11802 ft: 13827 corp: 12/137b lim: 25 exec/s: 0 rss: 69Mb L: 12/19 MS: 1 ChangeBit- 00:08:42.638 [2024-10-01 14:28:24.972237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.638 [2024-10-01 14:28:24.972265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.638 [2024-10-01 14:28:24.972320] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.638 [2024-10-01 14:28:24.972335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.638 #20 NEW cov: 11802 ft: 13902 corp: 13/149b lim: 25 exec/s: 0 rss: 69Mb L: 12/19 MS: 1 ChangeBinInt- 00:08:42.638 [2024-10-01 14:28:25.012376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.638 [2024-10-01 14:28:25.012403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.638 [2024-10-01 14:28:25.012445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.638 [2024-10-01 14:28:25.012461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.638 #21 NEW cov: 11802 ft: 13978 corp: 14/161b lim: 25 exec/s: 0 rss: 69Mb L: 12/19 MS: 1 CrossOver- 00:08:42.638 [2024-10-01 14:28:25.052475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.638 [2024-10-01 14:28:25.052502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.639 [2024-10-01 14:28:25.052547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.639 [2024-10-01 14:28:25.052563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.639 #22 NEW cov: 11802 ft: 13993 corp: 15/173b lim: 25 exec/s: 0 rss: 69Mb L: 12/19 MS: 1 ChangeBit- 00:08:42.639 [2024-10-01 14:28:25.092633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.639 [2024-10-01 14:28:25.092661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.639 [2024-10-01 14:28:25.092725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.639 [2024-10-01 14:28:25.092743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.639 #23 NEW cov: 11802 ft: 14012 corp: 16/185b lim: 25 exec/s: 0 rss: 69Mb L: 12/19 MS: 1 ChangeASCIIInt- 00:08:42.639 [2024-10-01 14:28:25.132816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.639 [2024-10-01 14:28:25.132845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.639 [2024-10-01 14:28:25.132911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.639 [2024-10-01 14:28:25.132927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.639 [2024-10-01 14:28:25.132986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.639 [2024-10-01 14:28:25.133002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.639 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:42.639 #29 NEW cov: 11825 ft: 14120 corp: 17/203b lim: 25 exec/s: 0 rss: 69Mb L: 18/19 MS: 1 InsertRepeatedBytes- 00:08:42.896 [2024-10-01 14:28:25.172865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.896 [2024-10-01 14:28:25.172895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.896 [2024-10-01 14:28:25.172954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.896 [2024-10-01 14:28:25.172970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.896 #30 NEW cov: 11825 ft: 14174 corp: 18/215b lim: 25 exec/s: 0 rss: 69Mb L: 12/19 MS: 1 ChangeBit- 00:08:42.896 [2024-10-01 14:28:25.213076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.896 [2024-10-01 14:28:25.213104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.896 [2024-10-01 14:28:25.213142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.896 [2024-10-01 14:28:25.213159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.896 [2024-10-01 14:28:25.213215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.896 [2024-10-01 14:28:25.213231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.896 #31 NEW cov: 11825 ft: 14191 corp: 19/233b lim: 25 exec/s: 31 rss: 69Mb L: 18/19 MS: 1 InsertRepeatedBytes- 00:08:42.896 [2024-10-01 14:28:25.263339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.896 [2024-10-01 14:28:25.263368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.896 [2024-10-01 14:28:25.263411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.896 [2024-10-01 14:28:25.263427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.896 [2024-10-01 14:28:25.263483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.896 [2024-10-01 14:28:25.263500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.896 [2024-10-01 14:28:25.263557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:42.896 [2024-10-01 14:28:25.263572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:42.896 #32 NEW cov: 11825 ft: 14615 corp: 20/256b lim: 25 exec/s: 32 rss: 69Mb L: 23/23 MS: 1 CMP- DE: "\000\000\000\010"- 00:08:42.896 [2024-10-01 
14:28:25.313378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.896 [2024-10-01 14:28:25.313407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.896 [2024-10-01 14:28:25.313445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.896 [2024-10-01 14:28:25.313461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.896 [2024-10-01 14:28:25.313519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.896 [2024-10-01 14:28:25.313534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.896 #33 NEW cov: 11825 ft: 14656 corp: 21/272b lim: 25 exec/s: 33 rss: 69Mb L: 16/23 MS: 1 InsertRepeatedBytes- 00:08:42.896 [2024-10-01 14:28:25.353347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.896 [2024-10-01 14:28:25.353375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.896 [2024-10-01 14:28:25.353437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.896 [2024-10-01 14:28:25.353452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.896 #34 NEW cov: 11825 ft: 14671 corp: 22/284b lim: 25 exec/s: 34 rss: 69Mb L: 12/23 MS: 1 CopyPart- 00:08:42.896 [2024-10-01 14:28:25.393347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.896 [2024-10-01 14:28:25.393374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.896 #35 NEW cov: 11825 ft: 14696 corp: 23/292b lim: 25 exec/s: 35 rss: 69Mb L: 8/23 MS: 1 CrossOver- 00:08:43.154 [2024-10-01 14:28:25.433592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.154 [2024-10-01 14:28:25.433619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.154 [2024-10-01 14:28:25.433676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.154 [2024-10-01 14:28:25.433692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.154 #36 NEW cov: 11825 ft: 14783 corp: 24/305b lim: 25 exec/s: 36 rss: 69Mb L: 13/23 MS: 1 InsertByte- 00:08:43.154 [2024-10-01 14:28:25.473595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.154 [2024-10-01 14:28:25.473622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.154 #37 NEW cov: 11825 ft: 14809 corp: 25/314b lim: 25 exec/s: 37 rss: 69Mb L: 9/23 MS: 1 InsertByte- 00:08:43.154 [2024-10-01 14:28:25.513818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 
nsid:0 00:08:43.154 [2024-10-01 14:28:25.513845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.154 [2024-10-01 14:28:25.513913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.154 [2024-10-01 14:28:25.513930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.154 #38 NEW cov: 11825 ft: 14816 corp: 26/326b lim: 25 exec/s: 38 rss: 69Mb L: 12/23 MS: 1 ChangeASCIIInt- 00:08:43.154 [2024-10-01 14:28:25.553830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.154 [2024-10-01 14:28:25.553856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.154 #39 NEW cov: 11825 ft: 14848 corp: 27/334b lim: 25 exec/s: 39 rss: 69Mb L: 8/23 MS: 1 ChangeBit- 00:08:43.154 [2024-10-01 14:28:25.593935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.154 [2024-10-01 14:28:25.593961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.154 #40 NEW cov: 11825 ft: 14890 corp: 28/342b lim: 25 exec/s: 40 rss: 69Mb L: 8/23 MS: 1 PersAutoDict- DE: "\000\000\000\010"- 00:08:43.154 [2024-10-01 14:28:25.634143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.154 [2024-10-01 14:28:25.634171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.154 [2024-10-01 14:28:25.634218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.154 [2024-10-01 14:28:25.634232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.154 #41 NEW cov: 11825 ft: 14955 corp: 29/356b lim: 25 exec/s: 41 rss: 69Mb L: 14/23 MS: 1 ChangeBit- 00:08:43.154 [2024-10-01 14:28:25.674387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.154 [2024-10-01 14:28:25.674414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.154 [2024-10-01 14:28:25.674457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.154 [2024-10-01 14:28:25.674472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.154 [2024-10-01 14:28:25.674529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.154 [2024-10-01 14:28:25.674544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.412 #42 NEW cov: 11825 ft: 14959 corp: 30/374b lim: 25 exec/s: 42 rss: 69Mb L: 18/23 MS: 1 CopyPart- 00:08:43.412 [2024-10-01 14:28:25.714369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.412 [2024-10-01 14:28:25.714396] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.412 [2024-10-01 14:28:25.714450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.412 [2024-10-01 14:28:25.714466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.412 #43 NEW cov: 11825 ft: 14982 corp: 31/386b lim: 25 exec/s: 43 rss: 69Mb L: 12/23 MS: 1 CrossOver- 00:08:43.412 [2024-10-01 14:28:25.754480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.412 [2024-10-01 14:28:25.754507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.412 [2024-10-01 14:28:25.754561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.412 [2024-10-01 14:28:25.754577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.412 #44 NEW cov: 11825 ft: 14991 corp: 32/398b lim: 25 exec/s: 44 rss: 70Mb L: 12/23 MS: 1 ShuffleBytes- 00:08:43.412 [2024-10-01 14:28:25.794504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.412 [2024-10-01 14:28:25.794531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.412 #45 NEW cov: 11825 ft: 14994 corp: 33/406b lim: 25 exec/s: 45 rss: 70Mb L: 8/23 MS: 1 ChangeByte- 00:08:43.412 [2024-10-01 14:28:25.834730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.412 [2024-10-01 14:28:25.834757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.412 [2024-10-01 14:28:25.834816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.412 [2024-10-01 14:28:25.834833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.412 #46 NEW cov: 11825 ft: 15046 corp: 34/418b lim: 25 exec/s: 46 rss: 70Mb L: 12/23 MS: 1 ChangeASCIIInt- 00:08:43.412 [2024-10-01 14:28:25.874828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.412 [2024-10-01 14:28:25.874853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.412 [2024-10-01 14:28:25.874911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.412 [2024-10-01 14:28:25.874927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.412 #47 NEW cov: 11825 ft: 15093 corp: 35/431b lim: 25 exec/s: 47 rss: 70Mb L: 13/23 MS: 1 ChangeByte- 00:08:43.412 [2024-10-01 14:28:25.914974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.412 [2024-10-01 14:28:25.915001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:08:43.412 [2024-10-01 14:28:25.915041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.412 [2024-10-01 14:28:25.915057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.413 #48 NEW cov: 11825 ft: 15105 corp: 36/444b lim: 25 exec/s: 48 rss: 70Mb L: 13/23 MS: 1 InsertByte- 00:08:43.670 [2024-10-01 14:28:25.955202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.670 [2024-10-01 14:28:25.955229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.670 [2024-10-01 14:28:25.955268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.670 [2024-10-01 14:28:25.955284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.670 [2024-10-01 14:28:25.955340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.670 [2024-10-01 14:28:25.955355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.670 #49 NEW cov: 11825 ft: 15144 corp: 37/460b lim: 25 exec/s: 49 rss: 70Mb L: 16/23 MS: 1 InsertRepeatedBytes- 00:08:43.670 [2024-10-01 14:28:25.995096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.670 [2024-10-01 14:28:25.995123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.670 #50 NEW cov: 11825 ft: 15156 corp: 38/468b lim: 25 exec/s: 50 rss: 70Mb L: 8/23 MS: 1 ShuffleBytes- 00:08:43.670 [2024-10-01 14:28:26.035499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.670 [2024-10-01 14:28:26.035525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.670 [2024-10-01 14:28:26.035593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.670 [2024-10-01 14:28:26.035610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.670 [2024-10-01 14:28:26.035664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.670 [2024-10-01 14:28:26.035679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.670 [2024-10-01 14:28:26.035736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:43.670 [2024-10-01 14:28:26.035752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:43.670 #51 NEW cov: 11825 ft: 15161 corp: 39/491b lim: 25 exec/s: 51 rss: 70Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:08:43.670 [2024-10-01 14:28:26.075315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.670 [2024-10-01 14:28:26.075342] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.670 #52 NEW cov: 11825 ft: 15189 corp: 40/500b lim: 25 exec/s: 52 rss: 70Mb L: 9/23 MS: 1 ChangeBinInt- 00:08:43.670 [2024-10-01 14:28:26.115522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.670 [2024-10-01 14:28:26.115548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.670 [2024-10-01 14:28:26.115605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.670 [2024-10-01 14:28:26.115621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.670 #53 NEW cov: 11825 ft: 15193 corp: 41/512b lim: 25 exec/s: 53 rss: 70Mb L: 12/23 MS: 1 ChangeASCIIInt- 00:08:43.670 [2024-10-01 14:28:26.155522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.670 [2024-10-01 14:28:26.155561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.670 #54 NEW cov: 11825 ft: 15202 corp: 42/518b lim: 25 exec/s: 54 rss: 70Mb L: 6/23 MS: 1 ChangeBit- 00:08:43.928 [2024-10-01 14:28:26.195986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.928 [2024-10-01 14:28:26.196013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.928 [2024-10-01 14:28:26.196065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.928 [2024-10-01 14:28:26.196080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.928 [2024-10-01 14:28:26.196133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.928 [2024-10-01 14:28:26.196149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.928 [2024-10-01 14:28:26.196205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:43.928 [2024-10-01 14:28:26.196220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:43.928 #55 NEW cov: 11825 ft: 15221 corp: 43/539b lim: 25 exec/s: 55 rss: 70Mb L: 21/23 MS: 1 InsertRepeatedBytes- 00:08:43.928 [2024-10-01 14:28:26.245776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.928 [2024-10-01 14:28:26.245804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.928 #56 NEW cov: 11825 ft: 15229 corp: 44/547b lim: 25 exec/s: 28 rss: 70Mb L: 8/23 MS: 1 ChangeASCIIInt- 00:08:43.928 #56 DONE cov: 11825 ft: 15229 corp: 44/547b lim: 25 exec/s: 28 rss: 70Mb 00:08:43.928 ###### Recommended dictionary. ###### 00:08:43.928 "\000\000\000\010" # Uses: 1 00:08:43.928 ###### End of recommended dictionary. 
###### 00:08:43.928 Done 56 runs in 2 second(s) 00:08:43.928 14:28:26 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_23.conf 00:08:43.928 14:28:26 -- ../common.sh@72 -- # (( i++ )) 00:08:43.928 14:28:26 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:43.928 14:28:26 -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:43.928 14:28:26 -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:43.928 14:28:26 -- nvmf/run.sh@24 -- # local timen=1 00:08:43.928 14:28:26 -- nvmf/run.sh@25 -- # local core=0x1 00:08:43.928 14:28:26 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:43.928 14:28:26 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:43.928 14:28:26 -- nvmf/run.sh@29 -- # printf %02d 24 00:08:43.928 14:28:26 -- nvmf/run.sh@29 -- # port=4424 00:08:43.928 14:28:26 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:43.928 14:28:26 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:43.928 14:28:26 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:43.928 14:28:26 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 -r /var/tmp/spdk24.sock 00:08:44.185 [2024-10-01 14:28:26.464146] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:44.185 [2024-10-01 14:28:26.464242] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709386 ] 00:08:44.185 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.442 [2024-10-01 14:28:26.775023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.442 [2024-10-01 14:28:26.859492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.442 [2024-10-01 14:28:26.859636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.442 [2024-10-01 14:28:26.918180] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.443 [2024-10-01 14:28:26.934386] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:44.443 INFO: Running with entropic power schedule (0xFF, 100). 00:08:44.443 INFO: Seed: 1051417665 00:08:44.701 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:44.701 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:44.701 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:44.701 INFO: A corpus is not provided, starting from an empty corpus 00:08:44.701 #2 INITED exec/s: 0 rss: 61Mb 00:08:44.701 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:44.701 This may also happen if the target rejected all inputs we tried so far 00:08:44.701 [2024-10-01 14:28:26.983428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.701 [2024-10-01 14:28:26.983460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.701 [2024-10-01 14:28:26.983513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.701 [2024-10-01 14:28:26.983529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.701 [2024-10-01 14:28:26.983580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.701 [2024-10-01 14:28:26.983595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:44.701 [2024-10-01 14:28:26.983648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.701 [2024-10-01 14:28:26.983663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:44.959 NEW_FUNC[1/672]: 0x466208 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:44.959 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:44.959 #11 NEW cov: 11670 ft: 11671 corp: 2/82b lim: 100 exec/s: 0 rss: 68Mb L: 81/81 MS: 4 ChangeBit-ChangeBit-InsertRepeatedBytes-InsertRepeatedBytes- 00:08:44.959 [2024-10-01 14:28:27.303926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.303964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.959 [2024-10-01 14:28:27.304022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.304038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.959 #16 NEW cov: 11783 ft: 12586 corp: 3/126b lim: 100 exec/s: 0 rss: 68Mb L: 44/81 MS: 5 CMP-CMP-CrossOver-ChangeByte-InsertRepeatedBytes- DE: "\036\234\000@\017\177\000\000"-"\002\000\000\000\000\000\000\000"- 00:08:44.959 [2024-10-01 14:28:27.343975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.344004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.959 [2024-10-01 14:28:27.344057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.344073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.959 #22 NEW cov: 11789 ft: 12742 corp: 4/170b lim: 100 exec/s: 0 rss: 69Mb L: 44/81 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:08:44.959 [2024-10-01 14:28:27.384094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.384122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.959 [2024-10-01 14:28:27.384169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.384186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.959 #23 NEW cov: 11874 ft: 13013 corp: 5/216b lim: 100 exec/s: 0 rss: 69Mb L: 46/81 MS: 1 CopyPart- 00:08:44.959 [2024-10-01 14:28:27.434214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.434243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.959 [2024-10-01 14:28:27.434290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.434305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.959 #24 NEW cov: 11874 ft: 13160 corp: 6/262b lim: 100 exec/s: 0 rss: 69Mb L: 46/81 MS: 1 ChangeBit- 00:08:44.959 [2024-10-01 14:28:27.474343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:562950456868864 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.474370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.959 [2024-10-01 14:28:27.474411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.959 [2024-10-01 14:28:27.474427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.215 #25 NEW cov: 11874 ft: 13197 corp: 7/308b lim: 100 exec/s: 0 rss: 69Mb L: 46/81 MS: 1 CMP- DE: "\002\000"- 00:08:45.215 [2024-10-01 14:28:27.514451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.215 [2024-10-01 14:28:27.514481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.215 [2024-10-01 14:28:27.514531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.215 [2024-10-01 14:28:27.514546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.215 #26 NEW cov: 11874 ft: 13311 corp: 8/354b lim: 100 exec/s: 0 rss: 69Mb L: 46/81 MS: 1 
ChangeByte- 00:08:45.215 [2024-10-01 14:28:27.554524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.215 [2024-10-01 14:28:27.554551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.215 [2024-10-01 14:28:27.554603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.215 [2024-10-01 14:28:27.554620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.215 #27 NEW cov: 11874 ft: 13340 corp: 9/399b lim: 100 exec/s: 0 rss: 69Mb L: 45/81 MS: 1 InsertByte- 00:08:45.215 [2024-10-01 14:28:27.594641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1085102592487264015 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.215 [2024-10-01 14:28:27.594667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.215 [2024-10-01 14:28:27.594708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.216 [2024-10-01 14:28:27.594727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.216 #28 NEW cov: 11874 ft: 13403 corp: 10/452b lim: 100 exec/s: 0 rss: 69Mb L: 53/81 MS: 1 InsertRepeatedBytes- 00:08:45.216 [2024-10-01 14:28:27.635066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.216 [2024-10-01 14:28:27.635093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.216 [2024-10-01 14:28:27.635141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.216 [2024-10-01 14:28:27.635157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.216 [2024-10-01 14:28:27.635209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:131072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.216 [2024-10-01 14:28:27.635223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.216 [2024-10-01 14:28:27.635279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.216 [2024-10-01 14:28:27.635294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.216 #29 NEW cov: 11874 ft: 13499 corp: 11/534b lim: 100 exec/s: 0 rss: 69Mb L: 82/82 MS: 1 CopyPart- 00:08:45.216 [2024-10-01 14:28:27.674806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14612714909889200128 len:51915 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.216 [2024-10-01 14:28:27.674832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.216 #33 NEW cov: 11874 ft: 14321 corp: 12/569b lim: 100 exec/s: 0 rss: 69Mb L: 35/82 MS: 4 ShuffleBytes-CrossOver-CMP-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\000\000"- 00:08:45.216 [2024-10-01 14:28:27.714996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.216 [2024-10-01 14:28:27.715022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.216 [2024-10-01 14:28:27.715058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.216 [2024-10-01 14:28:27.715073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.216 #34 NEW cov: 11874 ft: 14350 corp: 13/624b lim: 100 exec/s: 0 rss: 69Mb L: 55/82 MS: 1 CrossOver- 00:08:45.473 [2024-10-01 14:28:27.755443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.473 [2024-10-01 14:28:27.755472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.473 [2024-10-01 14:28:27.755516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.473 [2024-10-01 14:28:27.755532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.473 [2024-10-01 14:28:27.755584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4294967296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.473 [2024-10-01 14:28:27.755600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.473 [2024-10-01 14:28:27.755654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.473 [2024-10-01 14:28:27.755670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.473 #35 NEW cov: 11874 ft: 14457 corp: 14/705b lim: 100 exec/s: 0 rss: 69Mb L: 81/82 MS: 1 ChangeBinInt- 00:08:45.473 [2024-10-01 14:28:27.805617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4244635648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.473 [2024-10-01 14:28:27.805647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.473 [2024-10-01 14:28:27.805700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.473 [2024-10-01 14:28:27.805716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.473 [2024-10-01 14:28:27.805776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 
[2024-10-01 14:28:27.805792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.474 [2024-10-01 14:28:27.805848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 14:28:27.805863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.474 #37 NEW cov: 11874 ft: 14468 corp: 15/802b lim: 100 exec/s: 0 rss: 69Mb L: 97/97 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:45.474 [2024-10-01 14:28:27.845358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:33554432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 14:28:27.845385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.474 [2024-10-01 14:28:27.845442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 14:28:27.845458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.474 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:45.474 #38 NEW cov: 11897 ft: 14530 corp: 16/847b lim: 100 exec/s: 0 rss: 69Mb L: 45/97 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:08:45.474 [2024-10-01 14:28:27.885339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14612714909889200128 len:51915 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 14:28:27.885366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.474 #39 NEW cov: 11897 ft: 14586 corp: 17/882b lim: 100 exec/s: 0 rss: 69Mb L: 35/97 MS: 1 ChangeBinInt- 00:08:45.474 [2024-10-01 14:28:27.935648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:33554432 len:769 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 14:28:27.935675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.474 [2024-10-01 14:28:27.935727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 14:28:27.935759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.474 #40 NEW cov: 11897 ft: 14613 corp: 18/927b lim: 100 exec/s: 0 rss: 69Mb L: 45/97 MS: 1 ChangeBinInt- 00:08:45.474 [2024-10-01 14:28:27.976087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 14:28:27.976115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.474 [2024-10-01 14:28:27.976163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4616047781056781312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 
14:28:27.976179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.474 [2024-10-01 14:28:27.976229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:131072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 14:28:27.976243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.474 [2024-10-01 14:28:27.976298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.474 [2024-10-01 14:28:27.976313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.731 #41 NEW cov: 11897 ft: 14623 corp: 19/1009b lim: 100 exec/s: 41 rss: 69Mb L: 82/97 MS: 1 PersAutoDict- DE: "\036\234\000@\017\177\000\000"- 00:08:45.731 [2024-10-01 14:28:28.026238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.026266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.731 [2024-10-01 14:28:28.026314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.026330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.731 [2024-10-01 14:28:28.026383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.026401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.731 [2024-10-01 14:28:28.026455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:39353892653568 len:51915 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.026470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.731 #42 NEW cov: 11897 ft: 14695 corp: 20/1095b lim: 100 exec/s: 42 rss: 69Mb L: 86/97 MS: 1 CrossOver- 00:08:45.731 [2024-10-01 14:28:28.066021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14612714909889200128 len:51915 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.066047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.731 [2024-10-01 14:28:28.066083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14612714913291487946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.066099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.731 #43 NEW cov: 11897 ft: 14704 corp: 21/1138b lim: 100 exec/s: 43 rss: 69Mb L: 43/97 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:08:45.731 [2024-10-01 14:28:28.106104] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:562950456868864 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.106131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.731 [2024-10-01 14:28:28.106182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:100663296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.106198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.731 #44 NEW cov: 11897 ft: 14717 corp: 22/1184b lim: 100 exec/s: 44 rss: 69Mb L: 46/97 MS: 1 ChangeByte- 00:08:45.731 [2024-10-01 14:28:28.146260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.146287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.731 [2024-10-01 14:28:28.146322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.146338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.731 #45 NEW cov: 11897 ft: 14728 corp: 23/1230b lim: 100 exec/s: 45 rss: 69Mb L: 46/97 MS: 1 ChangeByte- 00:08:45.731 [2024-10-01 14:28:28.186315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.186341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.731 [2024-10-01 14:28:28.186379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.186395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.731 #46 NEW cov: 11897 ft: 14738 corp: 24/1276b lim: 100 exec/s: 46 rss: 70Mb L: 46/97 MS: 1 ChangeByte- 00:08:45.731 [2024-10-01 14:28:28.226825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.226851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.731 [2024-10-01 14:28:28.226907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.226923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.731 [2024-10-01 14:28:28.226976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446462603027808255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.226992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.731 
[2024-10-01 14:28:28.227044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.731 [2024-10-01 14:28:28.227060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.731 #47 NEW cov: 11897 ft: 14743 corp: 25/1363b lim: 100 exec/s: 47 rss: 70Mb L: 87/97 MS: 1 InsertRepeatedBytes- 00:08:45.989 [2024-10-01 14:28:28.266559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.266586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.266636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.266651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.989 #48 NEW cov: 11897 ft: 14754 corp: 26/1409b lim: 100 exec/s: 48 rss: 70Mb L: 46/97 MS: 1 ShuffleBytes- 00:08:45.989 [2024-10-01 14:28:28.307034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.307063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.307116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.307133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.307189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:131072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.307204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.307257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.307273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.989 #49 NEW cov: 11897 ft: 14766 corp: 27/1491b lim: 100 exec/s: 49 rss: 70Mb L: 82/97 MS: 1 ChangeBit- 00:08:45.989 [2024-10-01 14:28:28.346960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16638239752757634790 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.346988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.347024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16638239752757634790 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.347040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.347098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16638239752757634790 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.347130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.989 #50 NEW cov: 11897 ft: 15046 corp: 28/1566b lim: 100 exec/s: 50 rss: 70Mb L: 75/97 MS: 1 InsertRepeatedBytes- 00:08:45.989 [2024-10-01 14:28:28.387109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16638239752757634790 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.387136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.387172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16638239752757634790 len:58983 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.387187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.387241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16638239752757634790 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.387256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.989 #51 NEW cov: 11897 ft: 15057 corp: 29/1641b lim: 100 exec/s: 51 rss: 70Mb L: 75/97 MS: 1 ChangeBit- 00:08:45.989 [2024-10-01 14:28:28.426947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1085102592487264015 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.426975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.989 #52 NEW cov: 11897 ft: 15076 corp: 30/1669b lim: 100 exec/s: 52 rss: 70Mb L: 28/97 MS: 1 EraseBytes- 00:08:45.989 [2024-10-01 14:28:28.477528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.477556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.477595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4616047781056781312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.477610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.477663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:131072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.989 [2024-10-01 14:28:28.477679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.989 [2024-10-01 14:28:28.477740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8615773115252736 len:32513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:45.989 [2024-10-01 14:28:28.477756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.990 #53 NEW cov: 11897 ft: 15084 corp: 31/1751b lim: 100 exec/s: 53 rss: 70Mb L: 82/97 MS: 1 PersAutoDict- DE: "\036\234\000@\017\177\000\000"- 00:08:46.247 [2024-10-01 14:28:28.517351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16638239752757634790 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.517379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.247 [2024-10-01 14:28:28.517445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16638239752757634790 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.517465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.247 #54 NEW cov: 11897 ft: 15104 corp: 32/1791b lim: 100 exec/s: 54 rss: 70Mb L: 40/97 MS: 1 EraseBytes- 00:08:46.247 [2024-10-01 14:28:28.557728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.557755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.247 [2024-10-01 14:28:28.557803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.557818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.247 [2024-10-01 14:28:28.557873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.557886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.247 [2024-10-01 14:28:28.557940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:39353892653568 len:51915 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.557956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.247 #55 NEW cov: 11897 ft: 15115 corp: 33/1877b lim: 100 exec/s: 55 rss: 70Mb L: 86/97 MS: 1 ChangeByte- 00:08:46.247 [2024-10-01 14:28:28.597838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.597864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.247 [2024-10-01 14:28:28.597908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4616047781056781312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.597924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.247 [2024-10-01 14:28:28.597994] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:131072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.598011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.247 [2024-10-01 14:28:28.598066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8615773115252736 len:32513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.598082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.247 #56 NEW cov: 11897 ft: 15137 corp: 34/1959b lim: 100 exec/s: 56 rss: 70Mb L: 82/97 MS: 1 ChangeByte- 00:08:46.247 [2024-10-01 14:28:28.637656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.637682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.247 [2024-10-01 14:28:28.637734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.637750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.247 #57 NEW cov: 11897 ft: 15188 corp: 35/2003b lim: 100 exec/s: 57 rss: 70Mb L: 44/97 MS: 1 CrossOver- 00:08:46.247 [2024-10-01 14:28:28.677732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14612714909889200128 len:51915 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.677763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.247 [2024-10-01 14:28:28.677820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14612714913291487946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.247 [2024-10-01 14:28:28.677836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.248 #58 NEW cov: 11897 ft: 15262 corp: 36/2054b lim: 100 exec/s: 58 rss: 70Mb L: 51/97 MS: 1 CMP- DE: "NI\036\002\000\000\000\000"- 00:08:46.248 [2024-10-01 14:28:28.717846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.248 [2024-10-01 14:28:28.717874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.248 [2024-10-01 14:28:28.717916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:134217728 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.248 [2024-10-01 14:28:28.717932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.248 #59 NEW cov: 11897 ft: 15268 corp: 37/2100b lim: 100 exec/s: 59 rss: 70Mb L: 46/97 MS: 1 ChangeBit- 00:08:46.248 [2024-10-01 14:28:28.757993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.248 
[2024-10-01 14:28:28.758019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.248 [2024-10-01 14:28:28.758058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:162129589068365824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.248 [2024-10-01 14:28:28.758073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.506 #60 NEW cov: 11897 ft: 15282 corp: 38/2155b lim: 100 exec/s: 60 rss: 70Mb L: 55/97 MS: 1 InsertRepeatedBytes- 00:08:46.506 [2024-10-01 14:28:28.798380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.798408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.798454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.798470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.798523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:251658240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.798538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.798591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:57121596157984768 len:51915 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.798606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.506 #61 NEW cov: 11897 ft: 15298 corp: 39/2241b lim: 100 exec/s: 61 rss: 70Mb L: 86/97 MS: 1 CopyPart- 00:08:46.506 [2024-10-01 14:28:28.838208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1085102592487264015 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.838235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.838285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.838305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.506 #62 NEW cov: 11897 ft: 15351 corp: 40/2284b lim: 100 exec/s: 62 rss: 70Mb L: 43/97 MS: 1 CopyPart- 00:08:46.506 [2024-10-01 14:28:28.878333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.878360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.878417] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.878434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.506 #63 NEW cov: 11897 ft: 15379 corp: 41/2340b lim: 100 exec/s: 63 rss: 70Mb L: 56/97 MS: 1 InsertByte- 00:08:46.506 [2024-10-01 14:28:28.918739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.918766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.918836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.918852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.918918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.918932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.918988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446465901562691583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.919003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.506 #64 NEW cov: 11897 ft: 15383 corp: 42/2429b lim: 100 exec/s: 64 rss: 70Mb L: 89/97 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\003"- 00:08:46.506 [2024-10-01 14:28:28.958906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:503447552 len:2709 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.958932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.959001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13839296737284394767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.959017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.959069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.959084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.506 [2024-10-01 14:28:28.959137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:153726143178 len:51915 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.506 [2024-10-01 14:28:28.959153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.506 #65 NEW cov: 11897 ft: 15384 corp: 43/2516b lim: 100 exec/s: 32 rss: 70Mb L: 87/97 MS: 1 InsertByte- 
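Each fuzz case in the stream above is logged by SPDK as a command/completion pair: nvme_io_qpair_print_command shows the generated COMPARE (note nsid:0, which no valid namespace can match) and spdk_nvme_print_completion shows the target rejecting it with INVALID NAMESPACE OR FORMAT (00/0b). A quick, generic way to tally the two sides from a saved copy of this console output — fuzz.log is a placeholder name, not a file this job writes — is:
# hypothetical sketch: count commands sent vs. 00/0b rejections in a saved log
grep -o 'nvme_io_qpair_print_command' fuzz.log | wc -l
grep -o 'INVALID NAMESPACE OR FORMAT' fuzz.log | wc -l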
00:08:46.506 #65 DONE cov: 11897 ft: 15384 corp: 43/2516b lim: 100 exec/s: 32 rss: 70Mb 00:08:46.506 ###### Recommended dictionary. ###### 00:08:46.506 "\036\234\000@\017\177\000\000" # Uses: 2 00:08:46.506 "\002\000\000\000\000\000\000\000" # Uses: 3 00:08:46.506 "\002\000" # Uses: 0 00:08:46.506 "\000\000\000\000\000\000\000\000" # Uses: 0 00:08:46.506 "NI\036\002\000\000\000\000" # Uses: 0 00:08:46.506 "\377\377\377\377\377\377\377\003" # Uses: 0 00:08:46.506 ###### End of recommended dictionary. ###### 00:08:46.506 Done 65 runs in 2 second(s) 00:08:46.767 14:28:29 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_24.conf 00:08:46.767 14:28:29 -- ../common.sh@72 -- # (( i++ )) 00:08:46.767 14:28:29 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:46.767 14:28:29 -- nvmf/run.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:08:46.767 00:08:46.767 real 1m7.824s 00:08:46.767 user 1m41.617s 00:08:46.767 sys 0m9.828s 00:08:46.767 14:28:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.767 14:28:29 -- common/autotest_common.sh@10 -- # set +x 00:08:46.767 ************************************ 00:08:46.767 END TEST nvmf_fuzz 00:08:46.767 ************************************ 00:08:46.767 14:28:29 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:46.767 14:28:29 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:46.767 14:28:29 -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:46.767 14:28:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:46.767 14:28:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.767 14:28:29 -- common/autotest_common.sh@10 -- # set +x 00:08:46.767 ************************************ 00:08:46.767 START TEST vfio_fuzz 00:08:46.767 ************************************ 00:08:46.767 14:28:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:46.767 * Looking for test storage... 
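The "Recommended dictionary" block in the run summary just above is standard libFuzzer output: byte sequences, printed as C-style octal escapes, that repeatedly produced new coverage. As a rough sketch — the dictionary file name and the ./llvm_nvme_fuzz invocation are illustrative placeholders, not commands taken from this job — those tokens can be saved in an AFL/libFuzzer dictionary file (which uses \xNN hex escapes) and fed back to a later run through the stock -dict= flag:
# hypothetical follow-up run reusing the tokens recommended above;
# "\036\234\000@\017\177\000\000" in octal is \x1e\x9c\x00\x40\x0f\x7f\x00\x00 in hex
cat > nvmf_fuzz.dict <<'EOF'
kw1="\x1e\x9c\x00\x40\x0f\x7f\x00\x00"
kw2="\x02\x00\x00\x00\x00\x00\x00\x00"
kw3="\x00\x00\x00\x00\x00\x00\x00\x00"
EOF
# -dict=, -max_len= and -max_total_time= are standard libFuzzer flags; the
# "lim: 100" field and "Done 65 runs in 2 second(s)" above correspond to a
# 100-byte max input and a two-second time budget
./llvm_nvme_fuzz -dict=nvmf_fuzz.dict -max_len=100 -max_total_time=2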
00:08:46.767 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:46.767 14:28:29 -- vfio/run.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:46.767 14:28:29 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:46.767 14:28:29 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:46.767 14:28:29 -- common/autotest_common.sh@34 -- # set -e 00:08:46.767 14:28:29 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:46.767 14:28:29 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:46.767 14:28:29 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:46.767 14:28:29 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:46.767 14:28:29 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:46.767 14:28:29 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:46.767 14:28:29 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:46.767 14:28:29 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:46.767 14:28:29 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:46.767 14:28:29 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:46.767 14:28:29 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:46.767 14:28:29 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:46.767 14:28:29 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:46.767 14:28:29 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:46.767 14:28:29 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:46.767 14:28:29 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:46.767 14:28:29 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:46.767 14:28:29 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:46.767 14:28:29 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:46.767 14:28:29 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:46.767 14:28:29 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:46.767 14:28:29 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:46.767 14:28:29 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:46.767 14:28:29 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:46.767 14:28:29 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:46.767 14:28:29 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:46.767 14:28:29 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:46.767 14:28:29 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:46.767 14:28:29 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:46.767 14:28:29 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:46.767 14:28:29 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:46.767 14:28:29 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:46.767 14:28:29 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:46.767 14:28:29 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:46.767 14:28:29 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:46.767 14:28:29 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:46.767 14:28:29 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:46.767 14:28:29 -- common/build_config.sh@34 -- # 
CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:46.768 14:28:29 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:46.768 14:28:29 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:46.768 14:28:29 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:46.768 14:28:29 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:46.768 14:28:29 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:46.768 14:28:29 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:46.768 14:28:29 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:46.768 14:28:29 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:46.768 14:28:29 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:46.768 14:28:29 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:46.768 14:28:29 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:46.768 14:28:29 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:46.768 14:28:29 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:46.768 14:28:29 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:46.768 14:28:29 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:46.768 14:28:29 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:46.768 14:28:29 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:46.768 14:28:29 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:46.768 14:28:29 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:46.768 14:28:29 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:46.768 14:28:29 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:46.768 14:28:29 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:46.768 14:28:29 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:46.768 14:28:29 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:46.768 14:28:29 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:46.768 14:28:29 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:46.768 14:28:29 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:46.768 14:28:29 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:46.768 14:28:29 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:46.768 14:28:29 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:08:46.768 14:28:29 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:46.768 14:28:29 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:46.768 14:28:29 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:46.768 14:28:29 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:46.768 14:28:29 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:46.768 14:28:29 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:46.768 14:28:29 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:46.768 14:28:29 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:46.768 14:28:29 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:46.768 14:28:29 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:46.768 14:28:29 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:46.768 14:28:29 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:46.768 14:28:29 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:46.768 14:28:29 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:46.768 14:28:29 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:46.768 14:28:29 -- 
common/autotest_common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:46.768 14:28:29 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:46.768 14:28:29 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:46.768 14:28:29 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:46.768 14:28:29 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:46.768 14:28:29 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:46.768 14:28:29 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:46.768 14:28:29 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:46.768 14:28:29 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:46.768 14:28:29 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:46.768 14:28:29 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:46.768 14:28:29 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:46.768 14:28:29 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:46.768 14:28:29 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:46.768 14:28:29 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:46.768 14:28:29 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:46.768 #define SPDK_CONFIG_H 00:08:46.768 #define SPDK_CONFIG_APPS 1 00:08:46.768 #define SPDK_CONFIG_ARCH native 00:08:46.768 #undef SPDK_CONFIG_ASAN 00:08:46.768 #undef SPDK_CONFIG_AVAHI 00:08:46.768 #undef SPDK_CONFIG_CET 00:08:46.768 #define SPDK_CONFIG_COVERAGE 1 00:08:46.768 #define SPDK_CONFIG_CROSS_PREFIX 00:08:46.768 #undef SPDK_CONFIG_CRYPTO 00:08:46.768 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:46.768 #undef SPDK_CONFIG_CUSTOMOCF 00:08:46.768 #undef SPDK_CONFIG_DAOS 00:08:46.768 #define SPDK_CONFIG_DAOS_DIR 00:08:46.768 #define SPDK_CONFIG_DEBUG 1 00:08:46.768 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:46.768 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:46.768 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:46.768 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:46.768 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:46.768 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:46.768 #define SPDK_CONFIG_EXAMPLES 1 00:08:46.768 #undef SPDK_CONFIG_FC 00:08:46.768 #define SPDK_CONFIG_FC_PATH 00:08:46.768 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:46.768 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:46.768 #undef SPDK_CONFIG_FUSE 00:08:46.768 #define SPDK_CONFIG_FUZZER 1 00:08:46.768 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:46.768 #undef SPDK_CONFIG_GOLANG 00:08:46.768 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:46.768 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:46.768 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:46.768 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:46.768 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:46.768 #define SPDK_CONFIG_IDXD 1 00:08:46.768 #define 
SPDK_CONFIG_IDXD_KERNEL 1 00:08:46.768 #undef SPDK_CONFIG_IPSEC_MB 00:08:46.768 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:46.768 #define SPDK_CONFIG_ISAL 1 00:08:46.768 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:46.768 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:46.768 #define SPDK_CONFIG_LIBDIR 00:08:46.768 #undef SPDK_CONFIG_LTO 00:08:46.768 #define SPDK_CONFIG_MAX_LCORES 00:08:46.768 #define SPDK_CONFIG_NVME_CUSE 1 00:08:46.768 #undef SPDK_CONFIG_OCF 00:08:46.768 #define SPDK_CONFIG_OCF_PATH 00:08:46.768 #define SPDK_CONFIG_OPENSSL_PATH 00:08:46.768 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:46.768 #undef SPDK_CONFIG_PGO_USE 00:08:46.768 #define SPDK_CONFIG_PREFIX /usr/local 00:08:46.768 #undef SPDK_CONFIG_RAID5F 00:08:46.768 #undef SPDK_CONFIG_RBD 00:08:46.768 #define SPDK_CONFIG_RDMA 1 00:08:46.768 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:46.768 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:46.768 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:46.768 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:46.768 #undef SPDK_CONFIG_SHARED 00:08:46.768 #undef SPDK_CONFIG_SMA 00:08:46.768 #define SPDK_CONFIG_TESTS 1 00:08:46.768 #undef SPDK_CONFIG_TSAN 00:08:46.768 #define SPDK_CONFIG_UBLK 1 00:08:46.768 #define SPDK_CONFIG_UBSAN 1 00:08:46.768 #undef SPDK_CONFIG_UNIT_TESTS 00:08:46.768 #undef SPDK_CONFIG_URING 00:08:46.768 #define SPDK_CONFIG_URING_PATH 00:08:46.768 #undef SPDK_CONFIG_URING_ZNS 00:08:46.768 #undef SPDK_CONFIG_USDT 00:08:46.768 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:46.768 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:46.768 #define SPDK_CONFIG_VFIO_USER 1 00:08:46.768 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:46.768 #define SPDK_CONFIG_VHOST 1 00:08:46.768 #define SPDK_CONFIG_VIRTIO 1 00:08:46.768 #undef SPDK_CONFIG_VTUNE 00:08:46.769 #define SPDK_CONFIG_VTUNE_DIR 00:08:46.769 #define SPDK_CONFIG_WERROR 1 00:08:46.769 #define SPDK_CONFIG_WPDK_DIR 00:08:46.769 #undef SPDK_CONFIG_XNVME 00:08:46.769 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:46.769 14:28:29 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:46.769 14:28:29 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:46.769 14:28:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.769 14:28:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.769 14:28:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.769 14:28:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.769 14:28:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.769 14:28:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.769 14:28:29 -- paths/export.sh@5 -- # export PATH 00:08:46.769 14:28:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.769 14:28:29 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:46.769 14:28:29 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:46.769 14:28:29 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:46.769 14:28:29 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:46.769 14:28:29 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:46.769 14:28:29 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:46.769 14:28:29 -- pm/common@16 -- # TEST_TAG=N/A 00:08:46.769 14:28:29 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:46.769 14:28:29 -- common/autotest_common.sh@52 -- # : 1 00:08:46.769 14:28:29 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:46.769 14:28:29 -- common/autotest_common.sh@56 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:46.769 14:28:29 -- common/autotest_common.sh@58 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:46.769 14:28:29 -- common/autotest_common.sh@60 -- # : 1 00:08:46.769 14:28:29 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:46.769 14:28:29 -- common/autotest_common.sh@62 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:46.769 14:28:29 -- common/autotest_common.sh@64 -- # : 00:08:46.769 14:28:29 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:46.769 14:28:29 -- common/autotest_common.sh@66 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:46.769 14:28:29 -- common/autotest_common.sh@68 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:46.769 14:28:29 -- common/autotest_common.sh@70 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:46.769 14:28:29 -- common/autotest_common.sh@72 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:46.769 14:28:29 -- common/autotest_common.sh@74 -- # : 0 00:08:46.769 
14:28:29 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:46.769 14:28:29 -- common/autotest_common.sh@76 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:46.769 14:28:29 -- common/autotest_common.sh@78 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:46.769 14:28:29 -- common/autotest_common.sh@80 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:46.769 14:28:29 -- common/autotest_common.sh@82 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:46.769 14:28:29 -- common/autotest_common.sh@84 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:46.769 14:28:29 -- common/autotest_common.sh@86 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:46.769 14:28:29 -- common/autotest_common.sh@88 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:46.769 14:28:29 -- common/autotest_common.sh@90 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:46.769 14:28:29 -- common/autotest_common.sh@92 -- # : 1 00:08:46.769 14:28:29 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:46.769 14:28:29 -- common/autotest_common.sh@94 -- # : 1 00:08:46.769 14:28:29 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:46.769 14:28:29 -- common/autotest_common.sh@96 -- # : rdma 00:08:46.769 14:28:29 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:46.769 14:28:29 -- common/autotest_common.sh@98 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:46.769 14:28:29 -- common/autotest_common.sh@100 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:46.769 14:28:29 -- common/autotest_common.sh@102 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:46.769 14:28:29 -- common/autotest_common.sh@104 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:46.769 14:28:29 -- common/autotest_common.sh@106 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:46.769 14:28:29 -- common/autotest_common.sh@108 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:46.769 14:28:29 -- common/autotest_common.sh@110 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:46.769 14:28:29 -- common/autotest_common.sh@112 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:46.769 14:28:29 -- common/autotest_common.sh@114 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:46.769 14:28:29 -- common/autotest_common.sh@116 -- # : 1 00:08:46.769 14:28:29 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:46.769 14:28:29 -- common/autotest_common.sh@118 -- # : 00:08:46.769 14:28:29 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:46.769 14:28:29 -- common/autotest_common.sh@120 -- # : 0 00:08:46.769 14:28:29 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:46.769 14:28:29 -- common/autotest_common.sh@122 -- # : 0 
00:08:46.769 14:28:29 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:46.770 14:28:29 -- common/autotest_common.sh@124 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:46.770 14:28:29 -- common/autotest_common.sh@126 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:46.770 14:28:29 -- common/autotest_common.sh@128 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:46.770 14:28:29 -- common/autotest_common.sh@130 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:46.770 14:28:29 -- common/autotest_common.sh@132 -- # : 00:08:46.770 14:28:29 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:46.770 14:28:29 -- common/autotest_common.sh@134 -- # : true 00:08:46.770 14:28:29 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:46.770 14:28:29 -- common/autotest_common.sh@136 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:46.770 14:28:29 -- common/autotest_common.sh@138 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:46.770 14:28:29 -- common/autotest_common.sh@140 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:46.770 14:28:29 -- common/autotest_common.sh@142 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:46.770 14:28:29 -- common/autotest_common.sh@144 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:46.770 14:28:29 -- common/autotest_common.sh@146 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:46.770 14:28:29 -- common/autotest_common.sh@148 -- # : 00:08:46.770 14:28:29 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:46.770 14:28:29 -- common/autotest_common.sh@150 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:46.770 14:28:29 -- common/autotest_common.sh@152 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:46.770 14:28:29 -- common/autotest_common.sh@154 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:46.770 14:28:29 -- common/autotest_common.sh@156 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:46.770 14:28:29 -- common/autotest_common.sh@158 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:46.770 14:28:29 -- common/autotest_common.sh@160 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:46.770 14:28:29 -- common/autotest_common.sh@163 -- # : 00:08:46.770 14:28:29 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:46.770 14:28:29 -- common/autotest_common.sh@165 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:46.770 14:28:29 -- common/autotest_common.sh@167 -- # : 0 00:08:46.770 14:28:29 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:47.028 14:28:29 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:47.028 14:28:29 -- 
common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:47.028 14:28:29 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:47.028 14:28:29 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:47.028 14:28:29 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:47.028 14:28:29 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:47.028 14:28:29 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:47.028 14:28:29 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:47.028 14:28:29 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:47.028 14:28:29 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:47.028 14:28:29 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:47.028 14:28:29 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:47.028 14:28:29 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:47.028 14:28:29 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:47.028 14:28:29 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:47.028 14:28:29 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:47.028 14:28:29 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:47.028 14:28:29 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:47.028 14:28:29 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:47.028 14:28:29 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:47.028 14:28:29 -- common/autotest_common.sh@196 -- # cat 00:08:47.028 14:28:29 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:47.028 14:28:29 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:47.028 14:28:29 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:47.028 14:28:29 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:47.028 14:28:29 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:47.028 14:28:29 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:47.028 14:28:29 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:47.028 14:28:29 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:47.028 14:28:29 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:47.028 14:28:29 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:47.028 14:28:29 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:47.028 14:28:29 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:47.029 14:28:29 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:47.029 14:28:29 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:47.029 14:28:29 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:47.029 14:28:29 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:47.029 14:28:29 -- common/autotest_common.sh@242 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:47.029 14:28:29 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:47.029 14:28:29 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:47.029 14:28:29 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:47.029 14:28:29 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:47.029 14:28:29 -- common/autotest_common.sh@249 -- # valgrind= 00:08:47.029 14:28:29 -- common/autotest_common.sh@255 -- # uname -s 00:08:47.029 14:28:29 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:47.029 14:28:29 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:47.029 14:28:29 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:47.029 14:28:29 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:47.029 14:28:29 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:47.029 14:28:29 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j72 00:08:47.029 14:28:29 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:47.029 14:28:29 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:47.029 14:28:29 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:47.029 14:28:29 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:47.029 14:28:29 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:47.029 14:28:29 -- common/autotest_common.sh@309 -- # [[ -z 709808 ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@309 -- # kill -0 709808 00:08:47.029 14:28:29 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:47.029 14:28:29 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:47.029 14:28:29 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:47.029 14:28:29 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:47.029 14:28:29 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:47.029 14:28:29 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:47.029 14:28:29 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:47.029 14:28:29 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.zk0f4h 00:08:47.029 14:28:29 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:47.029 14:28:29 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.zk0f4h/tests/vfio /tmp/spdk.zk0f4h 00:08:47.029 14:28:29 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:47.029 14:28:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:47.029 14:28:29 -- common/autotest_common.sh@318 -- # df -T 00:08:47.029 14:28:29 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 
00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:47.029 14:28:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:47.029 14:28:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=722997248 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:47.029 14:28:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=4561432576 00:08:47.029 14:28:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=87169892352 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94500270080 00:08:47.029 14:28:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=7330377728 00:08:47.029 14:28:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=47248875520 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47250132992 00:08:47.029 14:28:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:08:47.029 14:28:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=18894151680 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18900054016 00:08:47.029 14:28:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=5902336 00:08:47.029 14:28:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=47249698816 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47250137088 00:08:47.029 14:28:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=438272 00:08:47.029 14:28:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:47.029 14:28:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450012672 00:08:47.029 14:28:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450024960 00:08:47.029 14:28:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:08:47.029 14:28:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:47.029 14:28:29 -- 
common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:47.029 * Looking for test storage... 00:08:47.029 14:28:29 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:47.029 14:28:29 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:47.029 14:28:29 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:47.029 14:28:29 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:47.029 14:28:29 -- common/autotest_common.sh@363 -- # mount=/ 00:08:47.029 14:28:29 -- common/autotest_common.sh@365 -- # target_space=87169892352 00:08:47.029 14:28:29 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:47.029 14:28:29 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:47.029 14:28:29 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@372 -- # new_size=9544970240 00:08:47.029 14:28:29 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:47.029 14:28:29 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:47.029 14:28:29 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:47.029 14:28:29 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:47.029 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:47.029 14:28:29 -- common/autotest_common.sh@380 -- # return 0 00:08:47.029 14:28:29 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:47.029 14:28:29 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:47.029 14:28:29 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:47.029 14:28:29 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:47.029 14:28:29 -- common/autotest_common.sh@1672 -- # true 00:08:47.029 14:28:29 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:47.029 14:28:29 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:47.029 14:28:29 -- common/autotest_common.sh@27 -- # exec 00:08:47.029 14:28:29 -- common/autotest_common.sh@29 -- # exec 00:08:47.029 14:28:29 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:47.029 14:28:29 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:47.029 14:28:29 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:47.029 14:28:29 -- common/autotest_common.sh@18 -- # set -x 00:08:47.029 14:28:29 -- vfio/run.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:47.029 14:28:29 -- ../common.sh@8 -- # pids=() 00:08:47.029 14:28:29 -- vfio/run.sh@58 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:47.029 14:28:29 -- vfio/run.sh@59 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:47.029 14:28:29 -- vfio/run.sh@59 -- # fuzz_num=7 00:08:47.029 14:28:29 -- vfio/run.sh@60 -- # (( fuzz_num != 0 )) 00:08:47.029 14:28:29 -- vfio/run.sh@62 -- # trap 'cleanup /tmp/vfio-user-*; exit 1' SIGINT SIGTERM EXIT 00:08:47.029 14:28:29 -- vfio/run.sh@65 -- # mem_size=0 00:08:47.029 14:28:29 -- vfio/run.sh@66 -- # [[ 1 -eq 1 ]] 00:08:47.029 14:28:29 -- vfio/run.sh@67 -- # start_llvm_fuzz_short 7 1 00:08:47.029 14:28:29 -- ../common.sh@69 -- # local fuzz_num=7 00:08:47.029 14:28:29 -- ../common.sh@70 -- # local time=1 00:08:47.029 14:28:29 -- ../common.sh@72 -- # (( i = 0 )) 00:08:47.029 14:28:29 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:47.029 14:28:29 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:47.029 14:28:29 -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:47.029 14:28:29 -- vfio/run.sh@23 -- # local timen=1 00:08:47.029 14:28:29 -- vfio/run.sh@24 -- # local core=0x1 00:08:47.029 14:28:29 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:47.029 14:28:29 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:47.029 14:28:29 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:47.029 14:28:29 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:47.029 14:28:29 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:47.029 14:28:29 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:47.029 14:28:29 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:47.029 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:47.029 14:28:29 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:47.029 [2024-10-01 14:28:29.393168] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:47.029 [2024-10-01 14:28:29.393234] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709925 ] 00:08:47.029 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.029 [2024-10-01 14:28:29.469376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.029 [2024-10-01 14:28:29.549574] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:47.029 [2024-10-01 14:28:29.549727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.287 INFO: Running with entropic power schedule (0xFF, 100). 00:08:47.287 INFO: Seed: 3848443664 00:08:47.287 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:47.287 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:47.287 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:47.287 INFO: A corpus is not provided, starting from an empty corpus 00:08:47.287 #2 INITED exec/s: 0 rss: 61Mb 00:08:47.287 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:47.287 This may also happen if the target rejected all inputs we tried so far 00:08:47.802 NEW_FUNC[1/631]: 0x43a218 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:85 00:08:47.802 NEW_FUNC[2/631]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:47.802 #5 NEW cov: 10762 ft: 10602 corp: 2/7b lim: 60 exec/s: 0 rss: 69Mb L: 6/6 MS: 3 ChangeByte-InsertByte-CMP- DE: "\001\000\001'"- 00:08:48.061 #6 NEW cov: 10776 ft: 14015 corp: 3/18b lim: 60 exec/s: 0 rss: 70Mb L: 11/11 MS: 1 CrossOver- 00:08:48.319 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:48.319 #7 NEW cov: 10793 ft: 15362 corp: 4/33b lim: 60 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 PersAutoDict- DE: "\001\000\001'"- 00:08:48.319 #8 NEW cov: 10793 ft: 16057 corp: 5/45b lim: 60 exec/s: 8 rss: 70Mb L: 12/15 MS: 1 InsertByte- 00:08:48.577 #14 NEW cov: 10793 ft: 16986 corp: 6/94b lim: 60 exec/s: 14 rss: 71Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:08:48.834 #15 NEW cov: 10793 ft: 17120 corp: 7/117b lim: 60 exec/s: 15 rss: 71Mb L: 23/49 MS: 1 CrossOver- 00:08:49.091 #16 NEW cov: 10796 ft: 17269 corp: 8/128b lim: 60 exec/s: 16 rss: 71Mb L: 11/49 MS: 1 ChangeByte- 00:08:49.091 #17 NEW cov: 10803 ft: 17306 corp: 9/143b lim: 60 exec/s: 17 rss: 71Mb L: 15/49 MS: 1 ChangeBinInt- 00:08:49.350 #18 NEW cov: 10803 ft: 17441 corp: 10/154b lim: 60 exec/s: 9 rss: 71Mb L: 11/49 MS: 1 ChangeBinInt- 00:08:49.350 #18 DONE cov: 10803 ft: 17441 corp: 10/154b lim: 60 exec/s: 9 rss: 71Mb 00:08:49.350 ###### Recommended dictionary. ###### 00:08:49.350 "\001\000\001'" # Uses: 1 00:08:49.350 ###### End of recommended dictionary. 
###### 00:08:49.350 Done 18 runs in 2 second(s) 00:08:49.609 14:28:32 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-0 00:08:49.609 14:28:32 -- ../common.sh@72 -- # (( i++ )) 00:08:49.609 14:28:32 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:49.609 14:28:32 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:49.609 14:28:32 -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:49.609 14:28:32 -- vfio/run.sh@23 -- # local timen=1 00:08:49.609 14:28:32 -- vfio/run.sh@24 -- # local core=0x1 00:08:49.609 14:28:32 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:49.609 14:28:32 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:49.609 14:28:32 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:49.609 14:28:32 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:49.609 14:28:32 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:49.609 14:28:32 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:49.609 14:28:32 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:49.609 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:49.609 14:28:32 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:49.609 [2024-10-01 14:28:32.096082] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:49.609 [2024-10-01 14:28:32.096170] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710298 ] 00:08:49.609 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.868 [2024-10-01 14:28:32.174236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.868 [2024-10-01 14:28:32.255094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:49.868 [2024-10-01 14:28:32.255240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.127 INFO: Running with entropic power schedule (0xFF, 100). 00:08:50.127 INFO: Seed: 2258467940 00:08:50.127 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:50.127 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:50.127 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:50.127 INFO: A corpus is not provided, starting from an empty corpus 00:08:50.127 #2 INITED exec/s: 0 rss: 61Mb 00:08:50.127 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:50.127 This may also happen if the target rejected all inputs we tried so far 00:08:50.127 [2024-10-01 14:28:32.562745] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:50.127 [2024-10-01 14:28:32.562785] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:50.127 [2024-10-01 14:28:32.562822] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:50.643 NEW_FUNC[1/638]: 0x43a7b8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:72 00:08:50.643 NEW_FUNC[2/638]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:50.643 #5 NEW cov: 10782 ft: 10602 corp: 2/23b lim: 40 exec/s: 0 rss: 69Mb L: 22/22 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:08:50.643 [2024-10-01 14:28:33.048582] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:50.643 [2024-10-01 14:28:33.048619] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:50.643 [2024-10-01 14:28:33.048638] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:50.901 #6 NEW cov: 10796 ft: 13869 corp: 3/31b lim: 40 exec/s: 0 rss: 70Mb L: 8/22 MS: 1 InsertRepeatedBytes- 00:08:50.901 [2024-10-01 14:28:33.253576] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:50.901 [2024-10-01 14:28:33.253603] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:50.901 [2024-10-01 14:28:33.253627] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:50.901 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:50.901 #7 NEW cov: 10813 ft: 15364 corp: 4/40b lim: 40 exec/s: 0 rss: 70Mb L: 9/22 MS: 1 CrossOver- 00:08:51.158 [2024-10-01 14:28:33.455055] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.158 [2024-10-01 14:28:33.455080] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.158 [2024-10-01 14:28:33.455100] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:51.158 #10 NEW cov: 10813 ft: 15727 corp: 5/51b lim: 40 exec/s: 10 rss: 70Mb L: 11/22 MS: 3 ShuffleBytes-ChangeByte-CrossOver- 00:08:51.158 [2024-10-01 14:28:33.648458] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.158 [2024-10-01 14:28:33.648481] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.158 [2024-10-01 14:28:33.648499] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:51.416 #11 NEW cov: 10813 ft: 16144 corp: 6/56b lim: 40 exec/s: 11 rss: 73Mb L: 5/22 MS: 1 EraseBytes- 00:08:51.416 [2024-10-01 14:28:33.841156] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.416 [2024-10-01 14:28:33.841178] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.416 [2024-10-01 14:28:33.841196] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:51.673 #16 NEW cov: 10813 ft: 16327 corp: 7/91b lim: 40 exec/s: 16 rss: 73Mb L: 35/35 MS: 5 
ShuffleBytes-ChangeByte-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:08:51.673 [2024-10-01 14:28:34.033123] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.673 [2024-10-01 14:28:34.033146] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.673 [2024-10-01 14:28:34.033164] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:51.673 #17 NEW cov: 10813 ft: 16659 corp: 8/103b lim: 40 exec/s: 17 rss: 73Mb L: 12/35 MS: 1 EraseBytes- 00:08:51.931 [2024-10-01 14:28:34.226083] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.931 [2024-10-01 14:28:34.226105] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.931 [2024-10-01 14:28:34.226122] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:51.931 #18 NEW cov: 10820 ft: 16738 corp: 9/120b lim: 40 exec/s: 18 rss: 73Mb L: 17/35 MS: 1 CMP- DE: "\200\000\000\000\000\000\000\000"- 00:08:51.931 [2024-10-01 14:28:34.415822] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.931 [2024-10-01 14:28:34.415845] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.931 [2024-10-01 14:28:34.415863] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:52.189 #19 NEW cov: 10820 ft: 16953 corp: 10/156b lim: 40 exec/s: 9 rss: 73Mb L: 36/36 MS: 1 InsertByte- 00:08:52.189 #19 DONE cov: 10820 ft: 16953 corp: 10/156b lim: 40 exec/s: 9 rss: 73Mb 00:08:52.189 ###### Recommended dictionary. ###### 00:08:52.189 "\200\000\000\000\000\000\000\000" # Uses: 0 00:08:52.189 ###### End of recommended dictionary. 
###### 00:08:52.189 Done 19 runs in 2 second(s) 00:08:52.448 14:28:34 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-1 00:08:52.448 14:28:34 -- ../common.sh@72 -- # (( i++ )) 00:08:52.448 14:28:34 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:52.448 14:28:34 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:52.448 14:28:34 -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:52.448 14:28:34 -- vfio/run.sh@23 -- # local timen=1 00:08:52.448 14:28:34 -- vfio/run.sh@24 -- # local core=0x1 00:08:52.448 14:28:34 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:52.448 14:28:34 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:52.448 14:28:34 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:52.448 14:28:34 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:52.448 14:28:34 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:52.448 14:28:34 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:52.448 14:28:34 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:52.448 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:52.448 14:28:34 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:52.448 [2024-10-01 14:28:34.843947] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:52.448 [2024-10-01 14:28:34.844017] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710692 ] 00:08:52.448 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.448 [2024-10-01 14:28:34.922610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.706 [2024-10-01 14:28:35.005181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.706 [2024-10-01 14:28:35.005342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.706 INFO: Running with entropic power schedule (0xFF, 100). 00:08:52.706 INFO: Seed: 713504595 00:08:52.706 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:52.706 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:52.706 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:52.706 INFO: A corpus is not provided, starting from an empty corpus 00:08:52.706 #2 INITED exec/s: 0 rss: 62Mb 00:08:52.706 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
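The per-fuzzer config used above is produced by the sed step traced at vfio/run.sh@34: it rewrites the shared fuzz_vfio_json.conf template so each fuzzer type gets its own vfio-user domain directories. A minimal bash sketch of that step, assuming the output is redirected into the /tmp/vfio-user-N/fuzz_vfio_json.conf file that run.sh@38 then passes via -c (the redirect itself is not visible in the xtrace):

    # hypothetical reconstruction of vfio/run.sh@34 for fuzzer 2
    vfiouser_dir=/tmp/vfio-user-2/domain/1      # run.sh@27
    vfiouser_io_dir=/tmp/vfio-user-2/domain/2   # run.sh@28
    sed -e "s%/tmp/vfio-user/domain/1%$vfiouser_dir%;
            s%/tmp/vfio-user/domain/2%$vfiouser_io_dir%" \
        "$testdir/fuzz_vfio_json.conf" > /tmp/vfio-user-2/fuzz_vfio_json.conf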
00:08:52.706 This may also happen if the target rejected all inputs we tried so far 00:08:52.964 [2024-10-01 14:28:35.317443] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:53.222 NEW_FUNC[1/636]: 0x43b1a8 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:104 00:08:53.222 NEW_FUNC[2/636]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:53.222 #4 NEW cov: 10762 ft: 10723 corp: 2/71b lim: 80 exec/s: 0 rss: 69Mb L: 70/70 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:53.480 [2024-10-01 14:28:35.789136] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:53.480 #5 NEW cov: 10776 ft: 13905 corp: 3/142b lim: 80 exec/s: 0 rss: 70Mb L: 71/71 MS: 1 InsertByte- 00:08:53.480 [2024-10-01 14:28:35.990538] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:53.738 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:53.738 #6 NEW cov: 10793 ft: 15446 corp: 4/213b lim: 80 exec/s: 0 rss: 70Mb L: 71/71 MS: 1 ChangeBinInt- 00:08:53.738 [2024-10-01 14:28:36.184349] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:53.995 #7 NEW cov: 10793 ft: 15876 corp: 5/284b lim: 80 exec/s: 7 rss: 70Mb L: 71/71 MS: 1 ShuffleBytes- 00:08:53.995 [2024-10-01 14:28:36.377272] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:53.995 [2024-10-01 14:28:36.377307] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:53.995 NEW_FUNC[1/2]: 0x13176f8 in endpoint_id /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:638 00:08:53.995 NEW_FUNC[2/2]: 0x1317998 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3084 00:08:53.995 #9 NEW cov: 10806 ft: 16734 corp: 6/321b lim: 80 exec/s: 9 rss: 70Mb L: 37/71 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:54.254 [2024-10-01 14:28:36.588235] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:54.254 #15 NEW cov: 10806 ft: 17015 corp: 7/392b lim: 80 exec/s: 15 rss: 70Mb L: 71/71 MS: 1 ShuffleBytes- 00:08:54.512 [2024-10-01 14:28:36.778997] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:54.512 #16 NEW cov: 10806 ft: 17374 corp: 8/463b lim: 80 exec/s: 16 rss: 70Mb L: 71/71 MS: 1 ChangeByte- 00:08:54.512 [2024-10-01 14:28:36.971896] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:54.512 [2024-10-01 14:28:36.971926] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:54.770 #17 NEW cov: 10813 ft: 17600 corp: 9/491b lim: 80 exec/s: 17 rss: 70Mb L: 28/71 MS: 1 InsertRepeatedBytes- 00:08:54.770 [2024-10-01 14:28:37.176209] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:54.770 [2024-10-01 14:28:37.176244] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:54.770 #23 NEW cov: 10813 ft: 17677 corp: 10/519b lim: 80 exec/s: 11 rss: 71Mb L: 28/71 MS: 1 ChangeByte- 00:08:54.770 #23 DONE cov: 10813 ft: 17677 corp: 10/519b lim: 80 exec/s: 11 rss: 71Mb 00:08:54.770 Done 23 runs in 2 second(s) 00:08:55.337 14:28:37 -- vfio/run.sh@49 
-- # rm -rf /tmp/vfio-user-2 00:08:55.337 14:28:37 -- ../common.sh@72 -- # (( i++ )) 00:08:55.337 14:28:37 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:55.337 14:28:37 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:55.337 14:28:37 -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:55.337 14:28:37 -- vfio/run.sh@23 -- # local timen=1 00:08:55.337 14:28:37 -- vfio/run.sh@24 -- # local core=0x1 00:08:55.337 14:28:37 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:55.337 14:28:37 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:55.337 14:28:37 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:55.337 14:28:37 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:55.337 14:28:37 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:55.337 14:28:37 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:55.337 14:28:37 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:55.337 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:55.337 14:28:37 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:55.337 [2024-10-01 14:28:37.629848] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:55.337 [2024-10-01 14:28:37.629918] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711086 ] 00:08:55.337 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.337 [2024-10-01 14:28:37.706478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.337 [2024-10-01 14:28:37.787018] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:55.337 [2024-10-01 14:28:37.787181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.595 INFO: Running with entropic power schedule (0xFF, 100). 00:08:55.595 INFO: Seed: 3495517630 00:08:55.595 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:55.595 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:55.595 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:55.595 INFO: A corpus is not provided, starting from an empty corpus 00:08:55.595 #2 INITED exec/s: 0 rss: 62Mb 00:08:55.595 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
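The counters stepped at ../common.sh@72 above come from the short-fuzz driver loop, and fuzz_num itself is derived at vfio/run.sh@58-59 by counting the .fn = entries in the harness source. Reconstructed from the traced lines (only the loop body's exact shape is inferred; $rootdir stands in for the absolute workspace path shown in the trace):

    fuzzfile=$rootdir/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
    fuzz_num=$(grep -c '\.fn =' "$fuzzfile")   # 7 in this build
    (( fuzz_num != 0 ))                        # run.sh@60: bail out if the table is empty

    start_llvm_fuzz_short() {
      local fuzz_num=$1   # 7
      local time=$2       # 1 second per fuzzer type
      for (( i = 0; i < fuzz_num; i++ )); do
        start_llvm_fuzz "$i" "$time" 0x1       # fuzzer_type, time, core mask
      done
    }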
00:08:55.595 This may also happen if the target rejected all inputs we tried so far 00:08:56.111 NEW_FUNC[1/632]: 0x43b898 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:125 00:08:56.111 NEW_FUNC[2/632]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:56.111 #15 NEW cov: 10752 ft: 10717 corp: 2/45b lim: 320 exec/s: 0 rss: 68Mb L: 44/44 MS: 3 CopyPart-EraseBytes-InsertRepeatedBytes- 00:08:56.369 #16 NEW cov: 10766 ft: 13323 corp: 3/89b lim: 320 exec/s: 0 rss: 69Mb L: 44/44 MS: 1 ChangeByte- 00:08:56.627 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:56.627 #17 NEW cov: 10783 ft: 15044 corp: 4/133b lim: 320 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 ChangeBinInt- 00:08:56.627 #18 NEW cov: 10783 ft: 15883 corp: 5/235b lim: 320 exec/s: 18 rss: 70Mb L: 102/102 MS: 1 InsertRepeatedBytes- 00:08:56.884 #21 NEW cov: 10783 ft: 16132 corp: 6/309b lim: 320 exec/s: 21 rss: 70Mb L: 74/102 MS: 3 InsertRepeatedBytes-ChangeBinInt-InsertRepeatedBytes- 00:08:57.142 #22 NEW cov: 10783 ft: 16395 corp: 7/439b lim: 320 exec/s: 22 rss: 70Mb L: 130/130 MS: 1 CrossOver- 00:08:57.449 #28 NEW cov: 10783 ft: 16445 corp: 8/613b lim: 320 exec/s: 28 rss: 70Mb L: 174/174 MS: 1 CrossOver- 00:08:57.449 #29 NEW cov: 10790 ft: 16612 corp: 9/657b lim: 320 exec/s: 29 rss: 71Mb L: 44/174 MS: 1 CrossOver- 00:08:57.735 #30 NEW cov: 10790 ft: 16678 corp: 10/804b lim: 320 exec/s: 15 rss: 71Mb L: 147/174 MS: 1 InsertRepeatedBytes- 00:08:57.735 #30 DONE cov: 10790 ft: 16678 corp: 10/804b lim: 320 exec/s: 15 rss: 71Mb 00:08:57.735 Done 30 runs in 2 second(s) 00:08:58.009 14:28:40 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-3 00:08:58.009 14:28:40 -- ../common.sh@72 -- # (( i++ )) 00:08:58.009 14:28:40 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:58.009 14:28:40 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:58.009 14:28:40 -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:58.009 14:28:40 -- vfio/run.sh@23 -- # local timen=1 00:08:58.009 14:28:40 -- vfio/run.sh@24 -- # local core=0x1 00:08:58.009 14:28:40 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:58.009 14:28:40 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:58.009 14:28:40 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:58.009 14:28:40 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:58.009 14:28:40 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:58.009 14:28:40 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:58.009 14:28:40 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:58.009 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:58.009 14:28:40 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y 
/tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:58.009 [2024-10-01 14:28:40.423326] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:58.009 [2024-10-01 14:28:40.423392] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711454 ] 00:08:58.009 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.009 [2024-10-01 14:28:40.503136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.278 [2024-10-01 14:28:40.588572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:58.278 [2024-10-01 14:28:40.588771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.278 INFO: Running with entropic power schedule (0xFF, 100). 00:08:58.278 INFO: Seed: 2004568297 00:08:58.536 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:58.536 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:58.536 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:58.536 INFO: A corpus is not provided, starting from an empty corpus 00:08:58.536 #2 INITED exec/s: 0 rss: 61Mb 00:08:58.536 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:58.536 This may also happen if the target rejected all inputs we tried so far 00:08:58.793 NEW_FUNC[1/632]: 0x43c118 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:145 00:08:58.793 NEW_FUNC[2/632]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:58.793 #6 NEW cov: 10747 ft: 10692 corp: 2/102b lim: 320 exec/s: 0 rss: 68Mb L: 101/101 MS: 4 CopyPart-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:59.051 #7 NEW cov: 10764 ft: 13411 corp: 3/203b lim: 320 exec/s: 0 rss: 69Mb L: 101/101 MS: 1 ShuffleBytes- 00:08:59.307 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:59.307 #13 NEW cov: 10781 ft: 15569 corp: 4/304b lim: 320 exec/s: 0 rss: 70Mb L: 101/101 MS: 1 CopyPart- 00:08:59.564 #14 NEW cov: 10781 ft: 15989 corp: 5/406b lim: 320 exec/s: 14 rss: 70Mb L: 102/102 MS: 1 CrossOver- 00:08:59.564 #15 NEW cov: 10781 ft: 16030 corp: 6/507b lim: 320 exec/s: 15 rss: 70Mb L: 101/102 MS: 1 CrossOver- 00:08:59.821 #16 NEW cov: 10781 ft: 16376 corp: 7/609b lim: 320 exec/s: 16 rss: 70Mb L: 102/102 MS: 1 CMP- DE: "\265\241\000\000\000\000\000\000"- 00:09:00.078 #22 NEW cov: 10781 ft: 16737 corp: 8/712b lim: 320 exec/s: 22 rss: 70Mb L: 103/103 MS: 1 InsertByte- 00:09:00.078 #23 NEW cov: 10781 ft: 17239 corp: 9/814b lim: 320 exec/s: 23 rss: 70Mb L: 102/103 MS: 1 ShuffleBytes- 00:09:00.335 #24 NEW cov: 10788 ft: 17416 corp: 10/916b lim: 320 exec/s: 24 rss: 71Mb L: 102/103 MS: 1 ChangeByte- 00:09:00.592 #25 NEW cov: 10788 ft: 17679 corp: 11/1018b lim: 320 exec/s: 12 rss: 71Mb L: 102/103 MS: 1 CrossOver- 00:09:00.592 #25 DONE cov: 10788 ft: 17679 corp: 11/1018b lim: 320 exec/s: 12 rss: 71Mb 00:09:00.592 ###### Recommended dictionary. ###### 00:09:00.592 "\265\241\000\000\000\000\000\000" # Uses: 0 00:09:00.592 ###### End of recommended dictionary. 
###### 00:09:00.592 Done 25 runs in 2 second(s) 00:09:00.849 14:28:43 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-4 00:09:00.849 14:28:43 -- ../common.sh@72 -- # (( i++ )) 00:09:00.849 14:28:43 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:00.849 14:28:43 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:09:00.849 14:28:43 -- vfio/run.sh@22 -- # local fuzzer_type=5 00:09:00.849 14:28:43 -- vfio/run.sh@23 -- # local timen=1 00:09:00.849 14:28:43 -- vfio/run.sh@24 -- # local core=0x1 00:09:00.849 14:28:43 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:00.849 14:28:43 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:09:00.849 14:28:43 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:09:00.849 14:28:43 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:09:00.849 14:28:43 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:09:00.849 14:28:43 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:00.849 14:28:43 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:09:00.849 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:00.849 14:28:43 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:09:00.849 [2024-10-01 14:28:43.277132] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:00.849 [2024-10-01 14:28:43.277200] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711824 ] 00:09:00.849 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.849 [2024-10-01 14:28:43.352881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.106 [2024-10-01 14:28:43.435579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:01.106 [2024-10-01 14:28:43.435732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.363 INFO: Running with entropic power schedule (0xFF, 100). 00:09:01.363 INFO: Seed: 559586737 00:09:01.363 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:09:01.363 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:09:01.363 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:01.363 INFO: A corpus is not provided, starting from an empty corpus 00:09:01.363 #2 INITED exec/s: 0 rss: 62Mb 00:09:01.363 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:01.363 This may also happen if the target rejected all inputs we tried so far 00:09:01.363 [2024-10-01 14:28:43.750763] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:01.363 [2024-10-01 14:28:43.750812] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:01.926 NEW_FUNC[1/638]: 0x43cb18 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:172 00:09:01.926 NEW_FUNC[2/638]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:01.926 #9 NEW cov: 10781 ft: 10673 corp: 2/77b lim: 120 exec/s: 0 rss: 68Mb L: 76/76 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:01.926 [2024-10-01 14:28:44.240612] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:01.926 [2024-10-01 14:28:44.240660] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:01.926 #12 NEW cov: 10795 ft: 12649 corp: 3/186b lim: 120 exec/s: 0 rss: 69Mb L: 109/109 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes- 00:09:01.926 [2024-10-01 14:28:44.431200] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:01.926 [2024-10-01 14:28:44.431231] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.183 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:02.183 #13 NEW cov: 10812 ft: 14857 corp: 4/262b lim: 120 exec/s: 0 rss: 70Mb L: 76/109 MS: 1 ChangeBit- 00:09:02.183 [2024-10-01 14:28:44.631590] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.183 [2024-10-01 14:28:44.631622] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.440 #14 NEW cov: 10815 ft: 15512 corp: 5/371b lim: 120 exec/s: 14 rss: 70Mb L: 109/109 MS: 1 ShuffleBytes- 00:09:02.440 [2024-10-01 14:28:44.822137] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.440 [2024-10-01 14:28:44.822168] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.440 #15 NEW cov: 10815 ft: 15950 corp: 6/481b lim: 120 exec/s: 15 rss: 70Mb L: 110/110 MS: 1 InsertByte- 00:09:02.697 [2024-10-01 14:28:45.011374] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.697 [2024-10-01 14:28:45.011405] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.697 #16 NEW cov: 10815 ft: 16305 corp: 7/590b lim: 120 exec/s: 16 rss: 70Mb L: 109/110 MS: 1 ChangeByte- 00:09:02.697 [2024-10-01 14:28:45.198987] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.697 [2024-10-01 14:28:45.199017] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.953 #18 NEW cov: 10815 ft: 16489 corp: 8/638b lim: 120 exec/s: 18 rss: 70Mb L: 48/110 MS: 2 InsertByte-CrossOver- 00:09:02.953 [2024-10-01 14:28:45.399691] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.953 [2024-10-01 14:28:45.399727] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:03.210 #19 NEW cov: 10822 ft: 16820 corp: 9/702b lim: 120 exec/s: 19 rss: 70Mb L: 64/110 MS: 1 EraseBytes- 
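Several runs above close with a "Recommended dictionary" block (e.g. "\001\000\001'" in run 0 and "\265\241\000\000\000\000\000\000" in run 4). In stock libFuzzer such tokens can be saved to an AFL-style dictionary file and fed back with the standard -dict= option; nothing in this log shows the llvm_vfio_fuzz wrapper forwarding that flag, so the sketch below is the generic technique rather than the harness's documented interface (the octal escapes are rewritten as the hex form the dictionary format accepts):

    # hypothetical: reuse the token suggested by run 4 on a later run
    cat > /tmp/vfio.dict <<'EOF'
    kw1="\xB5\xA1\x00\x00\x00\x00\x00\x00"
    EOF
    ./llvm_vfio_fuzz -dict=/tmp/vfio.dict ...   # plus the usual -m/-s/-P/-F/-c/-t/-D/-Y/-r/-Z arguments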
00:09:03.210 [2024-10-01 14:28:45.589296] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:03.210 [2024-10-01 14:28:45.589327] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:03.210 #25 NEW cov: 10822 ft: 16944 corp: 10/807b lim: 120 exec/s: 12 rss: 70Mb L: 105/110 MS: 1 InsertRepeatedBytes- 00:09:03.210 #25 DONE cov: 10822 ft: 16944 corp: 10/807b lim: 120 exec/s: 12 rss: 70Mb 00:09:03.210 Done 25 runs in 2 second(s) 00:09:03.467 14:28:45 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-5 00:09:03.725 14:28:45 -- ../common.sh@72 -- # (( i++ )) 00:09:03.725 14:28:45 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:03.725 14:28:45 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:09:03.725 14:28:45 -- vfio/run.sh@22 -- # local fuzzer_type=6 00:09:03.725 14:28:45 -- vfio/run.sh@23 -- # local timen=1 00:09:03.725 14:28:45 -- vfio/run.sh@24 -- # local core=0x1 00:09:03.725 14:28:45 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:03.725 14:28:45 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:09:03.725 14:28:45 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:09:03.725 14:28:45 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:09:03.725 14:28:45 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:09:03.725 14:28:45 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:03.725 14:28:46 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:09:03.725 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:03.725 14:28:46 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:09:03.725 [2024-10-01 14:28:46.031095] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:03.725 [2024-10-01 14:28:46.031174] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid712195 ] 00:09:03.725 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.725 [2024-10-01 14:28:46.105579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.725 [2024-10-01 14:28:46.185939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:03.725 [2024-10-01 14:28:46.186085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.982 INFO: Running with entropic power schedule (0xFF, 100). 
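Teardown in these runs is two-layered: vfio/run.sh@49 removes each /tmp/vfio-user-N directory as soon as its fuzzer finishes (the rm -rf lines above), while run.sh@62, traced at the start of this stage, installs a signal trap as a backstop. A sketch of the pattern; only the trap line is verbatim from the xtrace, and the cleanup body is an assumption:

    cleanup() { rm -rf "$@"; }                                    # assumed implementation
    trap 'cleanup /tmp/vfio-user-*; exit 1' SIGINT SIGTERM EXIT   # vfio/run.sh@62, verbatim
    # ... per-fuzzer work ...
    rm -rf /tmp/vfio-user-5                                       # run.sh@49, per-run teardown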
00:09:03.982 INFO: Seed: 3306568997
00:09:03.982 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f),
00:09:03.982 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0),
00:09:03.982 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:09:03.982 INFO: A corpus is not provided, starting from an empty corpus
00:09:03.982 #2 INITED exec/s: 0 rss: 62Mb
00:09:03.982 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:03.982 This may also happen if the target rejected all inputs we tried so far
00:09:03.982 [2024-10-01 14:28:46.491758] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:03.982 [2024-10-01 14:28:46.491808] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:04.497 NEW_FUNC[1/636]: 0x43d808 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:09:04.497 NEW_FUNC[2/636]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:09:04.497 #3 NEW cov: 10660 ft: 10736 corp: 2/39b lim: 90 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 InsertRepeatedBytes-
00:09:04.497 [2024-10-01 14:28:46.975417] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:04.497 [2024-10-01 14:28:46.975465] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:04.754 NEW_FUNC[1/2]: 0x443648 in read_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:325
00:09:04.754 NEW_FUNC[2/2]: 0x4587d8 in bdev_malloc_readv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/bdev/malloc/bdev_malloc.c:300
00:09:04.754 #4 NEW cov: 10790 ft: 13961 corp: 3/93b lim: 90 exec/s: 0 rss: 69Mb L: 54/54 MS: 1 InsertRepeatedBytes-
00:09:04.754 [2024-10-01 14:28:47.180636] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:04.754 [2024-10-01 14:28:47.180670] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:05.012 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:09:05.012 #5 NEW cov: 10807 ft: 15229 corp: 4/147b lim: 90 exec/s: 0 rss: 70Mb L: 54/54 MS: 1 CrossOver-
00:09:05.012 [2024-10-01 14:28:47.383169] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:05.012 [2024-10-01 14:28:47.383200] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:05.012 #6 NEW cov: 10807 ft: 15551 corp: 5/216b lim: 90 exec/s: 6 rss: 70Mb L: 69/69 MS: 1 InsertRepeatedBytes-
00:09:05.269 [2024-10-01 14:28:47.585748] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:05.269 [2024-10-01 14:28:47.585782] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:05.269 #7 NEW cov: 10807 ft: 16184 corp: 6/254b lim: 90 exec/s: 7 rss: 70Mb L: 38/69 MS: 1 ChangeByte-
00:09:05.269 [2024-10-01 14:28:47.790115] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:05.269 [2024-10-01 14:28:47.790147] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:05.526 #8 NEW cov: 10807 ft: 16540 corp: 7/309b lim: 90 exec/s: 8 rss: 70Mb L: 55/69 MS: 1 CrossOver-
00:09:05.526 [2024-10-01 14:28:47.998504] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:05.526 [2024-10-01 14:28:47.998533] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:05.784 #9 NEW cov: 10807 ft: 16828 corp: 8/347b lim: 90 exec/s: 9 rss: 70Mb L: 38/69 MS: 1 ChangeBit-
00:09:05.784 [2024-10-01 14:28:48.205959] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:05.784 [2024-10-01 14:28:48.205991] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:06.042 #10 NEW cov: 10814 ft: 16911 corp: 9/374b lim: 90 exec/s: 10 rss: 71Mb L: 27/69 MS: 1 CrossOver-
00:09:06.042 [2024-10-01 14:28:48.410809] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:06.042 [2024-10-01 14:28:48.410842] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:06.042 #11 NEW cov: 10814 ft: 16958 corp: 10/455b lim: 90 exec/s: 5 rss: 71Mb L: 81/81 MS: 1 InsertRepeatedBytes-
00:09:06.042 #11 DONE cov: 10814 ft: 16958 corp: 10/455b lim: 90 exec/s: 5 rss: 71Mb
00:09:06.042 Done 11 runs in 2 second(s)
00:09:06.301 14:28:48 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-6
00:09:06.558 14:28:48 -- ../common.sh@72 -- # (( i++ ))
00:09:06.558 14:28:48 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:06.558 14:28:48 -- vfio/run.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:09:06.558
00:09:06.558 real 0m19.664s
00:09:06.558 user 0m27.582s
00:09:06.558 sys 0m1.887s
00:09:06.558 14:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:06.558 14:28:48 -- common/autotest_common.sh@10 -- # set +x
00:09:06.558 ************************************
00:09:06.558 END TEST vfio_fuzz
00:09:06.558 ************************************
00:09:06.558 14:28:48 -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]]
00:09:06.558
00:09:06.558 real 1m27.691s
00:09:06.558 user 2m9.266s
00:09:06.558 sys 0m11.879s
00:09:06.558 14:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:06.558 14:28:48 -- common/autotest_common.sh@10 -- # set +x
00:09:06.558 ************************************
00:09:06.558 END TEST llvm_fuzz
00:09:06.558 ************************************
00:09:06.558 14:28:48 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]]
00:09:06.558 14:28:48 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT
00:09:06.558 14:28:48 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup
00:09:06.558 14:28:48 -- common/autotest_common.sh@712 -- # xtrace_disable
00:09:06.558 14:28:48 -- common/autotest_common.sh@10 -- # set +x
00:09:06.558 14:28:48 -- spdk/autotest.sh@386 -- # autotest_cleanup
00:09:06.558 14:28:48 -- common/autotest_common.sh@1371 -- # local autotest_es=0
00:09:06.558 14:28:48 -- common/autotest_common.sh@1372 -- # xtrace_disable
00:09:06.558 14:28:48 -- common/autotest_common.sh@10 -- # set +x
00:09:10.739 INFO: APP EXITING
00:09:10.739 INFO: killing all VMs
00:09:10.739 INFO: killing vhost app
00:09:10.739 INFO: EXIT DONE
00:09:14.022 Waiting for block devices as requested
00:09:14.022 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme
00:09:14.022 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:09:14.022 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:09:14.022 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:09:14.022 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:09:14.022 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:09:14.022 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:09:14.022 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:09:14.281 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:09:14.281 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:09:14.281 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:09:14.539 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:09:14.540 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:09:14.540 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:09:14.798 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:09:14.798 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:09:14.798 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:09:21.363 Cleaning
00:09:21.363 Removing: /dev/shm/spdk_tgt_trace.pid681876
00:09:21.363 Removing: /var/run/dpdk/spdk_pid679497
00:09:21.363 Removing: /var/run/dpdk/spdk_pid680633
00:09:21.363 Removing: /var/run/dpdk/spdk_pid681876
00:09:21.363 Removing: /var/run/dpdk/spdk_pid682582
00:09:21.363 Removing: /var/run/dpdk/spdk_pid682829
00:09:21.363 Removing: /var/run/dpdk/spdk_pid683066
00:09:21.363 Removing: /var/run/dpdk/spdk_pid683316
00:09:21.363 Removing: /var/run/dpdk/spdk_pid683550
00:09:21.363 Removing: /var/run/dpdk/spdk_pid683753
00:09:21.363 Removing: /var/run/dpdk/spdk_pid683949
00:09:21.363 Removing: /var/run/dpdk/spdk_pid684176
00:09:21.363 Removing: /var/run/dpdk/spdk_pid684936
00:09:21.363 Removing: /var/run/dpdk/spdk_pid687474
00:09:21.363 Removing: /var/run/dpdk/spdk_pid687693
00:09:21.363 Removing: /var/run/dpdk/spdk_pid687911
00:09:21.363 Removing: /var/run/dpdk/spdk_pid688088
00:09:21.363 Removing: /var/run/dpdk/spdk_pid688483
00:09:21.363 Removing: /var/run/dpdk/spdk_pid688498
00:09:21.363 Removing: /var/run/dpdk/spdk_pid689059
00:09:21.363 Removing: /var/run/dpdk/spdk_pid689144
00:09:21.363 Removing: /var/run/dpdk/spdk_pid689451
00:09:21.363 Removing: /var/run/dpdk/spdk_pid689485
00:09:21.363 Removing: /var/run/dpdk/spdk_pid689678
00:09:21.363 Removing: /var/run/dpdk/spdk_pid689859
00:09:21.363 Removing: /var/run/dpdk/spdk_pid690301
00:09:21.363 Removing: /var/run/dpdk/spdk_pid690498
00:09:21.363 Removing: /var/run/dpdk/spdk_pid690662
00:09:21.363 Removing: /var/run/dpdk/spdk_pid690766
00:09:21.363 Removing: /var/run/dpdk/spdk_pid690982
00:09:21.363 Removing: /var/run/dpdk/spdk_pid691167
00:09:21.363 Removing: /var/run/dpdk/spdk_pid691231
00:09:21.363 Removing: /var/run/dpdk/spdk_pid691409
00:09:21.363 Removing: /var/run/dpdk/spdk_pid691606
00:09:21.363 Removing: /var/run/dpdk/spdk_pid691795
00:09:21.363 Removing: /var/run/dpdk/spdk_pid691989
00:09:21.363 Removing: /var/run/dpdk/spdk_pid692169
00:09:21.363 Removing: /var/run/dpdk/spdk_pid692371
00:09:21.363 Removing: /var/run/dpdk/spdk_pid692551
00:09:21.363 Removing: /var/run/dpdk/spdk_pid692744
00:09:21.363 Removing: /var/run/dpdk/spdk_pid692934
00:09:21.363 Removing: /var/run/dpdk/spdk_pid693166
00:09:21.363 Removing: /var/run/dpdk/spdk_pid693351
00:09:21.363 Removing: /var/run/dpdk/spdk_pid693581
00:09:21.363 Removing: /var/run/dpdk/spdk_pid693768
00:09:21.363 Removing: /var/run/dpdk/spdk_pid693994
00:09:21.363 Removing: /var/run/dpdk/spdk_pid694178
00:09:21.363 Removing: /var/run/dpdk/spdk_pid694413
00:09:21.363 Removing: /var/run/dpdk/spdk_pid694614
00:09:21.363 Removing: /var/run/dpdk/spdk_pid694813
00:09:21.363 Removing: /var/run/dpdk/spdk_pid694992
00:09:21.363 Removing: /var/run/dpdk/spdk_pid695201
00:09:21.363 Removing: /var/run/dpdk/spdk_pid695379
00:09:21.363 Removing: /var/run/dpdk/spdk_pid695574
00:09:21.363 Removing: /var/run/dpdk/spdk_pid695761
00:09:21.363 Removing: /var/run/dpdk/spdk_pid695960
00:09:21.363 Removing: /var/run/dpdk/spdk_pid696139
00:09:21.363 Removing: /var/run/dpdk/spdk_pid696343
00:09:21.363 Removing: /var/run/dpdk/spdk_pid696521
00:09:21.363 Removing: /var/run/dpdk/spdk_pid696722
00:09:21.363 Removing: /var/run/dpdk/spdk_pid696995
00:09:21.363 Removing: /var/run/dpdk/spdk_pid697221
00:09:21.363 Removing: /var/run/dpdk/spdk_pid697433
00:09:21.363 Removing: /var/run/dpdk/spdk_pid697937
00:09:21.363 Removing: /var/run/dpdk/spdk_pid698186
00:09:21.363 Removing: /var/run/dpdk/spdk_pid698389
00:09:21.363 Removing: /var/run/dpdk/spdk_pid698572
00:09:21.363 Removing: /var/run/dpdk/spdk_pid698776
00:09:21.363 Removing: /var/run/dpdk/spdk_pid698958
00:09:21.363 Removing: /var/run/dpdk/spdk_pid699157
00:09:21.363 Removing: /var/run/dpdk/spdk_pid699370
00:09:21.363 Removing: /var/run/dpdk/spdk_pid699609
00:09:21.363 Removing: /var/run/dpdk/spdk_pid699764
00:09:21.363 Removing: /var/run/dpdk/spdk_pid700014
00:09:21.363 Removing: /var/run/dpdk/spdk_pid700560
00:09:21.363 Removing: /var/run/dpdk/spdk_pid700925
00:09:21.363 Removing: /var/run/dpdk/spdk_pid701294
00:09:21.363 Removing: /var/run/dpdk/spdk_pid701662
00:09:21.363 Removing: /var/run/dpdk/spdk_pid702031
00:09:21.363 Removing: /var/run/dpdk/spdk_pid702401
00:09:21.363 Removing: /var/run/dpdk/spdk_pid702762
00:09:21.363 Removing: /var/run/dpdk/spdk_pid703132
00:09:21.363 Removing: /var/run/dpdk/spdk_pid703503
00:09:21.363 Removing: /var/run/dpdk/spdk_pid703867
00:09:21.363 Removing: /var/run/dpdk/spdk_pid704235
00:09:21.363 Removing: /var/run/dpdk/spdk_pid704605
00:09:21.363 Removing: /var/run/dpdk/spdk_pid704964
00:09:21.363 Removing: /var/run/dpdk/spdk_pid705334
00:09:21.363 Removing: /var/run/dpdk/spdk_pid705695
00:09:21.363 Removing: /var/run/dpdk/spdk_pid706065
00:09:21.363 Removing: /var/run/dpdk/spdk_pid706435
00:09:21.363 Removing: /var/run/dpdk/spdk_pid706801
00:09:21.363 Removing: /var/run/dpdk/spdk_pid707189
00:09:21.363 Removing: /var/run/dpdk/spdk_pid707562
00:09:21.363 Removing: /var/run/dpdk/spdk_pid707916
00:09:21.363 Removing: /var/run/dpdk/spdk_pid708284
00:09:21.363 Removing: /var/run/dpdk/spdk_pid708647
00:09:21.363 Removing: /var/run/dpdk/spdk_pid709015
00:09:21.363 Removing: /var/run/dpdk/spdk_pid709386
00:09:21.363 Removing: /var/run/dpdk/spdk_pid709925
00:09:21.363 Removing: /var/run/dpdk/spdk_pid710298
00:09:21.363 Removing: /var/run/dpdk/spdk_pid710692
00:09:21.363 Removing: /var/run/dpdk/spdk_pid711086
00:09:21.363 Removing: /var/run/dpdk/spdk_pid711454
00:09:21.363 Removing: /var/run/dpdk/spdk_pid711824
00:09:21.363 Removing: /var/run/dpdk/spdk_pid712195
00:09:21.363 Clean
00:09:23.298 killing process with pid 630848
00:09:23.298 killing process with pid 630845
00:09:23.298 killing process with pid 630847
00:09:23.298 killing process with pid 630846
00:09:23.298 14:29:05 -- common/autotest_common.sh@1436 -- # return 0
00:09:23.298 14:29:05 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:09:23.298 14:29:05 -- common/autotest_common.sh@718 -- # xtrace_disable
00:09:23.298 14:29:05 -- common/autotest_common.sh@10 -- # set +x
00:09:23.298 14:29:05 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:09:23.298 14:29:05 -- common/autotest_common.sh@718 -- # xtrace_disable
00:09:23.298 14:29:05 -- common/autotest_common.sh@10 -- # set +x
00:09:23.298 14:29:05 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:23.298 14:29:05 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:09:23.298 14:29:05 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:09:23.298 14:29:05 -- spdk/autotest.sh@394 -- # hash lcov
00:09:23.298 14:29:05 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:09:23.298 14:29:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:09:23.555 14:29:05 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:09:23.555 14:29:05 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:23.555 14:29:05 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:23.555 14:29:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.555 14:29:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.555 14:29:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.555 14:29:05 -- paths/export.sh@5 -- $ export PATH
00:09:23.555 14:29:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.555 14:29:05 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:09:23.555 14:29:05 -- common/autobuild_common.sh@440 -- $ date +%s
00:09:23.555 14:29:05 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1727785745.XXXXXX
00:09:23.555 14:29:05 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1727785745.AyXICT
00:09:23.555 14:29:05 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:09:23.555 14:29:05 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:09:23.555 14:29:05 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:09:23.555 14:29:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:09:23.555 14:29:05 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:09:23.556 14:29:05 -- common/autobuild_common.sh@456 -- $ get_config_params
00:09:23.556 14:29:05 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:09:23.556 14:29:05 -- common/autotest_common.sh@10 -- $ set +x
00:09:23.556 14:29:05 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:09:23.556 14:29:05 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72
00:09:23.556 14:29:05 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:23.556 14:29:05 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:09:23.556 14:29:05 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:09:23.556 14:29:05 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:09:23.556 14:29:05 -- spdk/autopackage.sh@19 -- $ timing_finish
00:09:23.556 14:29:05 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:23.556 14:29:05 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:09:23.556 14:29:05 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:23.556 14:29:05 -- spdk/autopackage.sh@20 -- $ exit 0
00:09:23.556 + [[ -n 586955 ]]
00:09:23.556 + sudo kill 586955
00:09:23.564 [Pipeline] }
00:09:23.579 [Pipeline] // stage
00:09:23.584 [Pipeline] }
00:09:23.597 [Pipeline] // timeout
00:09:23.600 [Pipeline] }
00:09:23.612 [Pipeline] // catchError
00:09:23.617 [Pipeline] }
00:09:23.630 [Pipeline] // wrap
00:09:23.635 [Pipeline] }
00:09:23.645 [Pipeline] // catchError
00:09:23.653 [Pipeline] stage
00:09:23.654 [Pipeline] { (Epilogue)
00:09:23.666 [Pipeline] catchError
00:09:23.668 [Pipeline] {
00:09:23.679 [Pipeline] echo
00:09:23.680 Cleanup processes
00:09:23.684 [Pipeline] sh
00:09:23.962 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:23.962 630878 tee /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pm.log
00:09:23.962 719152 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:23.974 [Pipeline] sh
00:09:24.253 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:24.253 ++ grep -v 'sudo pgrep'
00:09:24.253 ++ awk '{print $1}'
00:09:24.253 + sudo kill -9
00:09:24.253 + true
00:09:24.263 [Pipeline] sh
00:09:24.544 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:25.492 [Pipeline] sh
00:09:25.774 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:25.774 Artifacts sizes are good
00:09:25.788 [Pipeline] archiveArtifacts
00:09:25.795 Archiving artifacts
00:09:25.871 [Pipeline] sh
00:09:26.154 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:26.168 [Pipeline] cleanWs
00:09:26.177 [WS-CLEANUP] Deleting project workspace...
00:09:26.177 [WS-CLEANUP] Deferred wipeout is used...
00:09:26.184 [WS-CLEANUP] done
00:09:26.186 [Pipeline] }
00:09:26.205 [Pipeline] // catchError
00:09:26.217 [Pipeline] sh
00:09:26.502 + logger -p user.info -t JENKINS-CI
00:09:26.512 [Pipeline] }
00:09:26.525 [Pipeline] // stage
00:09:26.531 [Pipeline] }
00:09:26.545 [Pipeline] // node
00:09:26.551 [Pipeline] End of Pipeline
00:09:26.600 Finished: SUCCESS